Azure AI articles https://gorovian.000webhostapp.com/?exam=t5/azure-ai/bg-p/AzureAIBlog Azure AI articles Mon, 18 Oct 2021 06:45:58 GMT AzureAIBlog 2021-10-18T06:45:58Z Building a Vaccination Verification system using Azure Form Recognizer https://gorovian.000webhostapp.com/?exam=t5/azure-ai/building-a-vaccination-verification-system-using-azure-form/ba-p/2839701 <H2><FONT size="4">Introduction</FONT></H2> <P><FONT size="2">As the COVID-19 pandemic continues, more and more organizations and venues are mandating vaccinations for individuals who choose to gather in large groups. In this context, one of the largest commercial real-estate companies approached me asking if I can help them scan vaccination cards of individuals, using Azure AI services. This put me on a journey to explore Azure Form Recognizer, part of Azure Applied AI Services, which uses AI to identify and extract key-value pairs and layout information like tables and selection marks from documents. My goal was to train a model to scan and validate vaccination cards with just a few samples in less than an hour. Form Recognizer is a great tool for such Robotic Process Automation (RPA) use cases where we need to automate the reading of physical documents without writing a lot of code or extensive data science expertise. Below are the steps I followed to build such a system.</FONT></P> <P>&nbsp;</P> <H5><FONT size="4">1. Setup Process</FONT></H5> <P><FONT size="2">In my effort to build a vaccination card reader, my first order of business was to procure some training data, i.e. pictures of vaccination cards. Thanks to my colleagues, friends, and the internet I was able to collect a few different vaccination cards from different states and with different brands of vaccines. Just for good measure I also took multiple pictures of some cards in different angles (including upside down) to ensure I built a robust model. Once I had about ten pictures, I uploaded them into an Azure Blob Storage Container to label them and train the model.</FONT></P> <P>&nbsp;</P> <P><FONT size="2"><STRONG><EM>Pro-tip: </EM></STRONG>To ensure we test the model with pictures the model hasn’t seen during the training process, do not upload all your pictures into the blob storage location for labeling and training, but set aside a few pictures on your local disk for testing purposes.&nbsp;</FONT></P> <P>&nbsp;</P> <P><FONT size="2">My next task was to stand up a Form Recognizer service in Azure, and it was a pretty straightforward process as I followed the instructions given in the <A href="#" target="_blank" rel="noopener">documentation</A>. I chose the Free F0 Pricing tier as this was just a test. 
The rest of the settings other than name of the service, location and resource group were left as defaults.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic01.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316901iCE155AB43640F761/image-size/medium?v=v2&amp;px=400" role="button" title="pic01.png" alt="Figure 1: Creation of Form Recognizer Service" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 1: Creation of Form Recognizer Service</span></span></P> <P>&nbsp;</P> <P><FONT size="2">Once the service was created, I saved the Endpoint and Key which enable me to connect to the service and train the model.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic02.png" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316902i2387A3C566DF5D51/image-size/large?v=v2&amp;px=999" role="button" title="pic02.png" alt="Figure 2: Form Recognizer End Point and Keys" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 2: Form Recognizer End Point and Keys</span></span></P> <H3>2. Labeling and Training</H3> <P><FONT size="2">Now that I had the pictures and the AI service ready, I proceeded to label the pictures to show the system which portions of the card it needs to read and comprehend. For this I used the OCR (Optical Character Recognition) Form Labeling Tool, which is an open-source tool specifically built for this purpose available on <A href="#" target="_blank" rel="noopener">GitHub</A>. I found that the best way to use this tool locally on your windows machine is to install docker and run the docker container which hosts this application as described in the <A href="#" target="_blank" rel="noopener">documentation</A>. If you already have docker installed and running, you just need to run the below two commands and you can access the tool at the URL: <A href="#" target="_blank" rel="noopener">http://localhost:3000</A>.</FONT></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="powershell">docker pull mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 docker run -it -p 3000:80 mcr.microsoft.com/azure-cognitive-services/custom-form/labeltool:latest-2.1 eula=accept</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><FONT size="2">The labeling tool is also available at&nbsp;<A href="#" target="_blank" rel="noopener">https://fott-2-1.azurewebsites.net/</A></FONT></P> <P>&nbsp;</P> <P><FONT size="2">Once I had the OCR tool up and running, I proceeded to create a project. To do so, I had to provide the following:</FONT></P> <OL> <LI><FONT size="2">The Form Recognizer service end point and key</FONT></LI> <LI><FONT size="2">Create a connection to the Azure Blob Storage Container</FONT></LI> </OL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic03.png" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316910i96D5F01A5B524FB0/image-size/large?v=v2&amp;px=999" role="button" title="pic03.png" alt="Figure 3:&nbsp;OCR Tool Project Creation" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 3:&nbsp;OCR Tool Project Creation</span></span></P> <P>&nbsp;</P> <P><FONT size="2">While step 1 was easy using the endpoint and key, step 2 was more complex. 
First, I had to ensure the Resource Sharing (CORS) setting was correctly set on the blob storage account, then I had to ensure I generated the SAS URL <U>at the container level</U> and not at the storage account level.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic04.png" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316911iE6074E87BEC4E5FC/image-size/large?v=v2&amp;px=999" role="button" title="pic04.png" alt="Figure 4: Azure Blob Storage CORS setting" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 4: Azure Blob Storage CORS setting</span></span></P> <P>&nbsp;</P> <P><FONT size="2">Azure Blob Container Connection Settings were set as below</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic05.png" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316912i5D9592BF1B0FE265/image-size/large?v=v2&amp;px=999" role="button" title="pic05.png" alt="Figure 5: OCR Tool Azure Blob Storage Connection settings" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 5: OCR Tool Azure Blob Storage Connection settings</span></span></P> <P>&nbsp;</P> <P><FONT size="2">Once the connectivity to the Form Recognizer service and the Blob Storage container was established, the documents automatically showed up in the labeling area. The tool was also smart enough to create bounding boxes and highlighted all text in yellow. All I had to do was create a tag on the top right corner, click on the relevant text segment, and click on the tag again to assign that tag to the text segment. The tagging process was very simple and took less than five minutes to tag all the pictures.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic06.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316914i48D7B0487A8293DE/image-size/large?v=v2&amp;px=999" role="button" title="pic06.png" alt="Figure 6: OCR Labeling Tool before Tag creation" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 6: OCR Labeling Tool before Tag creation</span></span></P> <P>&nbsp;</P> <P><FONT size="2"><EM><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic07.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316915iE04033BFE006C24D/image-size/large?v=v2&amp;px=999" role="button" title="pic07.png" alt="Figure 7: OCR Labeling Tool after Tag creation and assignment" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 7: OCR Labeling Tool after Tag creation and assignment</span></span></EM></FONT></P> <P>&nbsp;</P> <P><FONT size="2">Once I had 5 documents labeled, I clicked on the Train icon and a model was created instantly. 
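</FONT></P> <P>&nbsp;</P> <P><FONT size="2">If you prefer to drive the same training step from code rather than the labeling tool UI, the sketch below shows one way to do it with the Form Recognizer Python SDK (the azure-ai-formrecognizer 3.1 package). It is only a sketch: the endpoint, key, container SAS URL and test file name are placeholders for the values created earlier, and it also scores one of the photos I had set aside for testing.</FONT></P> <P>&nbsp;</P> <LI-CODE lang="python"># Sketch only: train a custom model from the labeled container, then analyze a held-out photo.
# The endpoint, key, SAS URL and file name below are placeholders for your own values.
from azure.ai.formrecognizer import FormTrainingClient, FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://&lt;your-form-recognizer&gt;.cognitiveservices.azure.com/"
key = "&lt;your-key&gt;"
container_sas_url = "&lt;container-level SAS URL used by the labeling tool&gt;"

# Train on the labeled images; use_training_labels=True picks up the label files
# the OCR labeling tool wrote next to the images in the container.
training_client = FormTrainingClient(endpoint, AzureKeyCredential(key))
model = training_client.begin_training(container_sas_url, use_training_labels=True).result()
print(f"Model {model.model_id} finished with status {model.status}")

# Analyze one of the vaccination card photos that was kept out of the training set.
form_client = FormRecognizerClient(endpoint, AzureKeyCredential(key))
with open("test-vaccination-card.jpg", "rb") as card:
    poller = form_client.begin_recognize_custom_forms(model_id=model.model_id, form=card)

for recognized_form in poller.result():
    for name, field in recognized_form.fields.items():
        print(f"{name}: {field.value} (confidence {field.confidence})")</LI-CODE> <P>&nbsp;</P> <P><FONT size="2">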
Given that I only had a few images to train on, the training process was only a few seconds.</FONT></P> <P>&nbsp;</P> <P><FONT size="2"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic08.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316916iD20AD289A4D3A27F/image-size/large?v=v2&amp;px=999" role="button" title="pic08.png" alt="Figure 8: Training results" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 8: Training results</span></span></FONT></P> <P>&nbsp;</P> <H5><FONT size="4">3. Testing and Deployment</FONT></H5> <P><FONT size="2">Now it was time to test out my new model to see if it recognizes other vaccination cards. I had set aside a couple of images from the training process to validate the model. The OCR tool also helps with testing the model hosted on the Azure Form Recognizer service. All I had to do was click on the ‘Analyze’ menu item and upload an image from my local disk. The model performed amazingly well and was able to pick out the vaccine brand name even though certain parts of the card were hidden by a finger and the vaccine brand was handwritten instead of printed.</FONT></P> <P>&nbsp;</P> <P><FONT size="2"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic09.png" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316919iBC676D599A2CDCB9/image-size/large?v=v2&amp;px=999" role="button" title="pic09.png" alt="Figure 9: OCR Analyzes a new vaccination card" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 9: OCR Analyzes a new vaccination card</span></span></FONT></P> <P>&nbsp;</P> <P><FONT size="2">The model was spot-on so far, but I was curious to see if this model could identify a fake vaccination card, such as that of this recently reported case of a traveler to Hawaii? I put the model to the test, and the results were mixed. For one, this model was not able to pick up the CDC logo text and the vaccine brand name correctly as this card looks characteristically different from a standard vaccination card. In that sense, it was able to flag that this card looks different and needs to be examined further to confirm the veracity of the card.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pic10.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316921i216646863E765F86/image-size/large?v=v2&amp;px=999" role="button" title="pic10.png" alt="Figure 10: Fake Vaccination card" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 10: Fake Vaccination card</span></span></P> <P>&nbsp;</P> <H5><FONT size="4">Conclusion</FONT></H5> <P><FONT size="2">The model performed incredibly well even though we built the model with less than ten images. One way to enhance the performance of the model would be train separate models for different kinds of vaccination cards (handwritten, printed stickers, non-USA etc.) and <A href="#" target="_blank" rel="noopener">compose</A> a hybrid model to handle all types of cards globally. Going back to the commercial real estate company that wanted to implement this solution: they tested out Azure Form Recognizer and found it to be better than an off the shelf RPA product they also tested. 
They are currently working on building a system to read vaccination cards of their employees to record their vaccination status. Overall, this was a quick and easy way to build a model to help organizations responsible for verifying vaccination status.</FONT></P> <P>&nbsp;</P> <H5><FONT size="4">Do it yourself:&nbsp;</FONT><FONT size="2">Try out Azure Form Recognizer for your own organization’s unique use cases! Check out the <A href="#" target="_blank" rel="noopener">quick start guide</A> for a step by step guide.</FONT></H5> Thu, 14 Oct 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/building-a-vaccination-verification-system-using-azure-form/ba-p/2839701 Abraham-Pabbathi 2021-10-14T16:00:00Z What's new in Form Recognizer: new Document API, Signature detection, 122 Languages and lots more https://gorovian.000webhostapp.com/?exam=t5/azure-ai/what-s-new-in-form-recognizer-new-document-api-signature/ba-p/2835372 <P>Form Recognizer is an AI service that provides pre-built &nbsp;or custom models to extract information from documents. Today, customers can take advantage of a new set of preview capabilities that enhance your document process automation or knowledge mining capabilities. This release is packed with new features and updates.</P> <P>&nbsp;</P> <H1>What’s New in Form Recognizer</H1> <P>&nbsp;</P> <H2>Document API</H2> <P><STRONG><A href="#" target="_blank" rel="noopener">General document API</A></STRONG> uses a pretrained model to extract text, tables, structure key value pairs and entities from a form or document. With general document, you no longer need to train a model to extract key value pairs that can be inferred from the structure or content of most documents. Start with the <A href="#" target="_blank" rel="noopener">general document</A> overview to learn more about this feature or to test the API in the new <A href="#" target="_blank" rel="noopener">Form Recognizer Studio</A>.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GeneralDocument.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316563i128EFD3C8512D41E/image-size/large?v=v2&amp;px=999" role="button" title="GeneralDocument.png" alt="Try General Document in the Form Recognizer Studio" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Try General Document in the Form Recognizer Studio</span></span></P> <P>&nbsp;</P> <P>The new General Document API is only available on the latest version of the <A href="#" target="_blank" rel="noopener">REST API</A> which has been redesigned for better usability. 
The <A href="#" target="_blank" rel="noopener">migration guide</A> describes the differences between the API versions and how you can start using the new API version.</P> <H3>Code Examples</H3> <H4>REST API</H4> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">curl -v -i POST "https://{endpoint}/formrecognizer/documentModels/prebuilt-document:analyze?api-version=2021-09-30-preview&amp;api-version=2021-09-30-preview HTTP/1.1" -H "Content-Type: applicationhttps://techcommunity.microsoft.com/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{​​​​​​​'source': '{your-document-url}'}​​​​​​​​"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>A successful request will return a operation location header that will contain the URL to get the result of the operation when complete.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">curl -v -i https://{endpoint}/formrecognizer/documentModels/prebuilt-document/analyzeResults/{operation}?api-version=2021-09-30-preview -H "Content-Type: applicationhttps://techcommunity.microsoft.com/json" -H "Ocp-apim-subscription-key: {api key}"</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H4>C# Sample</H4> <OL> <LI>Install the C# SDK<LI-CODE lang="csharp">dotnet add package Azure.AI.FormRecognizer</LI-CODE></LI> <LI>Authenticate the client<LI-CODE lang="csharp">string endpoint = "&lt;your-endpoint&gt;"; string apiKey = "&lt;your-apiKey&gt;"; var credential = new AzureKeyCredential(apiKey); var client = new DocumentAnalysisClient(new Uri(endpoint), credential); </LI-CODE></LI> <LI> <P>Analyze a document with General Document API</P> <LI-CODE lang="csharp">string fileUri = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"; AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-document", fileUri); await operation.WaitForCompletionAsync(); AnalyzeResult result = operation.Value; Console.WriteLine("Detected entities:"); foreach (DocumentEntity entity in result.Entities) { if (entity.SubCategory == null) { Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}'."); } else { Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}' and sub-category '{entity.SubCategory}'."); } } Console.WriteLine("Detected key-value pairs:"); foreach (DocumentKeyValuePair kvp in result.KeyValuePairs) { if (kvp.Value.Content == null) { Console.WriteLine($" Found key with no value: '{kvp.Key.Content}'"); } else { Console.WriteLine($" Found key-value pair: '{kvp.Key.Content}' and '{kvp.Value.Content}'"); } } </LI-CODE></LI> </OL> <P>&nbsp;</P> <P>&nbsp;For more information see <A href="#" target="_self">General Document API&nbsp;</A></P> <P>&nbsp;</P> <H2>Signature detection</H2> <P>&nbsp;Signatures can now be detected in form fields in a custom form model. Signature field is a new field type in custom models that detects if a signature exists in the specified field. 
In addition to key value pairs, tables and selection marks you can now also train a model to detect signature in documents.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CustomFormSignature.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316569iC0AFCAD8C448297D/image-size/large?v=v2&amp;px=999" role="button" title="CustomFormSignature.png" alt="Labeling experience for signature field in custom forms" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Labeling experience for signature field in custom forms</span></span></P> <P>&nbsp;</P> <P>To learn more about signature detection, see <A href="#" target="_blank" rel="noopener">custom and composed models</A>.</P> <H2>Language Expansion</H2> <P>Printed text extraction covers a total of <STRONG>122 languages </STRONG>with the addition of 49 new languages including Russian and other Cyrillic and Latin languages. Handwritten text extraction now supports Chinese, French, German, Italian, Portuguese, and Spanish in addition to the existing English handwritten support. For the full list of supported languages see <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <H2>Hotel Receipts</H2> <P>Support for<STRONG> Hotel receipts</STRONG> is now available in the <SPAN><A href="#" target="_blank" rel="noopener">Receipt model</A>. </SPAN>You can now automatically process a hotel receipt and extract the key value pairs required, such as date of arrival and date of departure and line items.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Prebuilt-receipt.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316571i294FACE2C4A19D80/image-size/large?v=v2&amp;px=999" role="button" title="Prebuilt-receipt.png" alt="Results for an analyzed hotel receipt in the Form Recognizer Studio" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Results for an analyzed hotel receipt in the Form Recognizer Studio</span></span></P> <P>&nbsp;</P> <P>Learn more about the receipt model&nbsp;<A href="#" target="_blank" rel="noopener">here</A>.</P> <H2>Pre-built ID</H2> <P>The ID pre-built model now recognizes additional fields within the US driver’s license such as endorsements, restrictions, and vehicle classification.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Prebuilt-id.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316572iDAF6D812D4AB57AE/image-size/large?v=v2&amp;px=999" role="button" title="Prebuilt-id.png" alt="Results for analyzed prebuilt id in the Form Recognizer Studio" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Results for analyzed prebuilt id in the Form Recognizer Studio</span></span></P> <P>&nbsp;</P> <P>Learn more about the ID document model&nbsp;<A href="#" target="_blank" rel="noopener">here</A>.</P> <H2>New Form Recognizer Studio, REST API &amp; Updated SDK</H2> <P><A href="#" target="_self">Form Recognizer Studio</A> simplifies the use of the service, enabling testing pre-built models, testing pre-trained models, and building and testing custom models. 
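</P> <P>&nbsp;</P> <P>The updated SDKs follow the same pattern as the C# sample above. As an illustration, a minimal Python sketch against the preview package (assuming azure-ai-formrecognizer 3.2.0b1 or a later preview, with placeholder endpoint and key) could look like this:</P> <P>&nbsp;</P> <LI-CODE lang="python"># Sketch only: analyze a document with the prebuilt-document model using the preview Python SDK.
# The endpoint and key are placeholders; the sample document URL is the one used in the C# sample.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("https://&lt;your-resource&gt;.cognitiveservices.azure.com/",
                                AzureKeyCredential("&lt;your-key&gt;"))

poller = client.begin_analyze_document_from_url(
    "prebuilt-document",
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf")
result = poller.result()

# Print the key-value pairs inferred from the document's structure and content.
for kvp in result.key_value_pairs:
    key_text = kvp.key.content if kvp.key else ""
    value_text = kvp.value.content if kvp.value else "&lt;no value&gt;"
    print(f"{key_text}: {value_text}")</LI-CODE> <P>&nbsp;</P> <P>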
As the service expands, the REST API has been redesigned for improved usability, the <A href="#" target="_blank" rel="noopener">migration guide</A> will help you transition to the new API.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="studio.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/316573iFCF0DA098983F838/image-size/large?v=v2&amp;px=999" role="button" title="studio.png" alt="The new Form Recognizer Studio to test, train and analyze document models" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">The new Form Recognizer Studio to test, train and analyze document models</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H1>Get started&nbsp;</H1> <UL> <LI>Try out the new <A href="#" target="_self">Form Recognizer Studio</A></LI> <LI>Get started with the new <A href="#" target="_self">SDK </A>or &nbsp;<A href="#" target="_self">REST API</A></LI> </UL> <P>Form Recognizer continues to improve AI quality and service performance. If you have any questions or feedback on either the preview APIs or the service, please contact us via <A href="https://gorovian.000webhostapp.com/?exam=mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener">email</A>.</P> Thu, 14 Oct 2021 07:46:28 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/what-s-new-in-form-recognizer-new-document-api-signature/ba-p/2835372 Vinod Kurpad 2021-10-14T07:46:28Z Announcing Automated ML (AutoML) for Images https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-automated-ml-automl-for-images/ba-p/2843034 <P>We are excited to announce the Public Preview of automated ML (AutoML) for Images within Azure Machine Learning (Azure ML). This new capability boosts data scientist productivity when building computer vision models for tasks such as image classification, object detection and instance segmentation.</P> <P>&nbsp;</P> <P>Customers across various industries are looking to leverage machine learning to build models that can process image data. Applications range from image classification of fashion photos to PPE detection in industrial environments. The ideal solution will allow users to easily build models, control the model training to optimize model performance, and offer a way to easily manage these ML models end-to-end. While Azure Machine Learning offers a solution for managing the end-to-end ML lifecycle, customers have traditionally had to rely on the tedious process of custom training their image models. Iteratively finding the right set of model algorithms and hyperparameters for these scenarios typically requires significant data scientist effort.</P> <P>&nbsp;</P> <P>With AutoML support for computer vision tasks, Azure ML customers can now easily build models trained on image data, without writing any training code. Customers can seamlessly integrate with Azure ML's data labeling capability and use this labeled data for generating image models. They can control the model generated, selecting from a variety of state of the art algorithms and can optionally tune the hyperparameters to optimize model performance. The resulting model can then be deployed as a web service in Azure ML or downloaded for local use and can be operationalized at scale by leveraging Azure ML’s MLOps capabilities.</P> <P>Authoring AutoML models for computer vision tasks is currently supported via the Azure ML Python SDK. 
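</P> <P>&nbsp;</P> <P>To make that concrete, here is a hedged sketch of what submitting an image run with the preview Python SDK can look like. It assumes the preview Azure ML AutoML packages that expose AutoMLImageConfig and ImageTask are installed, and that the workspace, GPU compute target and labeled training dataset referenced by name already exist; those names are placeholders, not part of this announcement.</P> <P>&nbsp;</P> <LI-CODE lang="python"># Sketch only: submit an AutoML for Images object detection run with the preview Python SDK.
# The workspace config, compute target name and dataset name below are placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLImageConfig
from azureml.automl.core.shared.constants import ImageTask
from azureml.train.hyperdrive import GridParameterSampling, choice

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="my-labeled-images")  # e.g. exported from Azure ML Data Labeling

image_config = AutoMLImageConfig(
    task=ImageTask.IMAGE_OBJECT_DETECTION,             # or IMAGE_CLASSIFICATION, IMAGE_INSTANCE_SEGMENTATION
    compute_target=ws.compute_targets["gpu-cluster"],
    training_data=training_data,
    # Pin a single algorithm here; swap in RandomParameterSampling over several model
    # names and hyperparameters (plus an early termination policy) to run a sweep.
    hyperparameter_sampling=GridParameterSampling({"model_name": choice("yolov5")}),
    iterations=1,
)

run = Experiment(ws, "automl-images-demo").submit(image_config)
run.wait_for_completion(show_output=True)</LI-CODE> <P>&nbsp;</P> <P>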
The resulting experimentation runs, models and outputs are accessible from the Azure ML Studio.</P> <P>&nbsp;</P> <P>Following is a summary of features and benefits of AutoML for Images -</P> <UL> <LI><STRONG>Support for multiple computer vision tasks</STRONG></LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swatig_0-1634147194009.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317118i738AA6573D67E557/image-size/large?v=v2&amp;px=999" role="button" title="swatig_0-1634147194009.jpeg" alt="swatig_0-1634147194009.jpeg" /></span></P> <P><EM>Image from: </EM><A href="#" target="_blank" rel="noopener"><EM>http://cs231n.stanford.edu/slides/2021/lecture_15.pdf</EM></A></P> <UL> <LI>Multi-class image classification is used when an image is classified with only a single label from a set of classes - e.g. each image is classified as either an image of a 'cat' or a 'dog' or a 'duck'.</LI> <LI>Multi-label image classification is used when an image could have one or more labels from a set of labels - e.g. an image could be labeled with both 'cat' and 'dog'.</LI> <LI>Object detection is used to identify objects in an image and locate each object with a bounding box e.g. locate all dogs and cats in an image and draw a bounding box around each.</LI> <LI>Instance segmentation is used to identify objects in an image at the pixel level, drawing a polygon around each object in the image.</LI> </UL> <P>&nbsp;</P> <UL> <LI><STRONG>Seamless integration with Azure ML Data Labeling </STRONG></LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swatig_0-1634147757069.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317120iF808A1D00064D87B/image-size/large?v=v2&amp;px=999" role="button" title="swatig_0-1634147757069.png" alt="swatig_0-1634147757069.png" /></span></P> <P>Use <A href="#" target="_blank" rel="noopener">Azure ML data labeling</A> to manage labeling your image data. Co-ordinate data, labels and team members to efficiently manage labeling tasks. Export the labeled data to an Azure ML Dataset, that can be used to train your computer vision model.</P> <P>&nbsp;</P> <UL> <LI><STRONG>State-of-the-art algorithms for computer vision tasks </STRONG>Choose from a variety of state-of-the-art open source algorithms for your task. Supported algorithms include - <UL> <LI><STRONG>Image Classification: </STRONG>ViT, SEResNext, ResNet, ResNeSt, MobileNet&nbsp;</LI> <LI><STRONG>Object Detection: </STRONG>YoloV5, Faster-RCNN, RetinaNet</LI> <LI><STRONG>Instance segmentation: </STRONG>MaskRCNN</LI> </UL> </LI> </UL> <P>You can either specify a single model algorithm or explore and compare multiple algorithms in a single AutoML run.</P> <P>&nbsp;</P> <UL> <LI><STRONG>Optimize model performance through hyperparameter tuning </STRONG>When training computer vision models, model performance depends heavily on the hyperparameter values selected. Often, you might want to tune the hyperparameters to get optimal performance. AutoML for Images exposes a range of hyperparameters that can be easily tuned to get the best performance from your model. 
While many of the hyperparameters exposed are model agnostic, there are also several task-specific and model-specific hyperparameters that you can tune.</LI> </UL> <P>You can optionally sweep across multiple algorithms and hyperparameters in a single AutoML run, to find the optimal settings for your model. This feature applies the hyperparameter tuning capabilities in Azure Machine Learning, allowing you to control sampling methods, early termination policies and resources spent on the sweep. A sample configuration showing how to leverage this capability is included below -</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swatig_0-1634149405511.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317131i59A2EACFBA4A40E3/image-size/large?v=v2&amp;px=999" role="button" title="swatig_0-1634149405511.png" alt="swatig_0-1634149405511.png" /></span></P> <P>&nbsp;</P> <UL> <LI><STRONG>Deploy model to the cloud or download for local use </STRONG>Once your model training completes, you can easily deploy the model as an AzureML web service using AKS or ACI for this model deployment</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swatig_3-1634147194033.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/317119i9711D656BDB84C55/image-size/large?v=v2&amp;px=999" role="button" title="swatig_3-1634147194033.png" alt="swatig_3-1634147194033.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <UL> <LI><STRONG>Operationalize model at scale leveraging Azure Machine Learning MLOps </STRONG>Manage the end-to-end model lifecycle including batch scoring, automated retraining, etc. using <A href="#" target="_blank" rel="noopener">Azure ML’s MLOps capabilities</A>.</LI> </UL> <P><STRONG>Summary</STRONG></P> <P>In summary, you can use AutoML for Images to easily build and optimize computer vision models, while offering flexibility and control over the entire model training and deployment process. 
Please give it a try and share your feedback with us.</P> <P>&nbsp;</P> <P><STRONG>Learn More</STRONG></P> <P><A href="#" target="_blank" rel="noopener">What is AutoML</A></P> <P><A href="#" target="_blank" rel="noopener">How to auto-train an image model</A></P> <P><STRONG>Sample notebooks</STRONG></P> <UL> <LI><A href="#" target="_blank" rel="noopener">Multi-class Classification Notebook</A></LI> <LI><A href="#" target="_blank" rel="noopener">Multi-label Classification Notebook</A></LI> <LI><A href="#" target="_blank" rel="noopener">Object Detection Notebook</A></LI> <LI><A href="#" target="_blank" rel="noopener">Instance Segmentation Notebook</A></LI> <LI><A href="#" target="_blank" rel="noopener">Batch scoring Notebook</A></LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Wed, 13 Oct 2021 18:24:23 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-automated-ml-automl-for-images/ba-p/2843034 swatig 2021-10-13T18:24:23Z Customize a translation to make sense in a specific context https://gorovian.000webhostapp.com/?exam=t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956 <P><STRONG><SPAN class="TextRun Highlight SCXW104075101 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW104075101 BCX8">Translator is a cloud-based machine translation service you can use to translate text in</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">to 100+ language</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">s</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">&nbsp;with a&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">few&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">simple REST API call</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">s</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">.</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">&nbsp;Out of the box the translator API supports many languages and features</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">, but&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">there are sometimes some scenarios where the translation doesn’t really fit the&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">domain language, the solve this you can use the Custom Translator to train your own cus</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">tomized model. 
In this article we will dive into how the translator and the custom translator works</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">, talk about&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">when to use&nbsp;</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">which,</SPAN><SPAN class="NormalTextRun SCXW104075101 BCX8">&nbsp;and point you to the right documentation to get started.</SPAN></SPAN><SPAN class="EOP SCXW104075101 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></STRONG></P> <H2 aria-level="2"><SPAN data-contrast="none">Let’s start with the basics</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">First you need to setup a&nbsp;Translator&nbsp;resource in Azure.&nbsp;</SPAN><SPAN>&nbsp;<BR /></SPAN><SPAN data-contrast="none">To get started with Azure you can get a&nbsp;started with a <A href="#" target="_self">free Azure account here</A></SPAN><SPAN data-contrast="none">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">The easiest way to setup a Translator resource is using the <A href="#" target="_self">Azure CLI</A>&nbsp;</SPAN><SPAN data-contrast="none">or follow this link to <A href="#" target="_self">create the resource in the portal.</A></SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">az cognitiveservices account create --name Translator --resource-group Translator_RG --kind TextTranslation --sku S1 --location westeurope </LI-CODE> <P>&nbsp;</P> <P><SPAN class="TextRun Highlight SCXW109427085 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW109427085 BCX8">For the pricing (</SPAN><SPAN class="NormalTextRun SpellingErrorV2 SCXW109427085 BCX8">sku</SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8">) you have a few options</SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8">. A free one</SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8">, to get you started, a pay-as-you go</SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8"><SPAN>&nbsp;</SPAN>one, here you only pay for what you use. 
Or if you<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8">know you are going to use the service a lot there are also pre-paid packages.</SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8"><SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW109427085 BCX8">More information on billing and costs <A href="#" target="_self">can be found here</A>.</SPAN></SPAN></P> <H2><SPAN data-contrast="none">Hello world&nbsp;with&nbsp;Text Translation</SPAN><SPAN>&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">Text translation is a cloud-based REST API feature of the Translator service that uses neural machine translation technology to enable quick and accurate source-to-target text translation in real time across all <A href="#" target="_self">supported languages</A>.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">In this example we will&nbsp;use <A href="#" target="_self">Visual Studio Code</A> with the <A href="#" target="_self">REST Client extension</A> to call the Text Translation API.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Use the Azure CLI to retrieve the subscription keys:</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">az cognitiveservices account keys list --name Translator --resource-group Translator_RG </LI-CODE> <P>&nbsp;</P> <P><SPAN data-contrast="none">Next you need to POST the request to the translator API.</SPAN><SPAN>&nbsp;</SPAN></P> <P><SPAN data-contrast="none">In the URL you specify the&nbsp;from and to&nbsp;languages&nbsp;and in the&nbsp;body&nbsp;you send the text that you want to translate.&nbsp;If you leave the&nbsp;“from”&nbsp;parameter,&nbsp;the&nbsp;api&nbsp;will detect the language automatically.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">The array&nbsp;with the text in the JSON body&nbsp;can have at most 100 elements&nbsp;and the entire text included in the request cannot exceed 10,000 characters including spaces.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="basic">POST https://api.cognitive.microsofttranslator.com/translate         ?api-version=3.0         &amp;from=en         &amp;to=tlh-Latn         &amp;to=nl content-type: applicationhttps://techcommunity.microsoft.com/json Ocp-Apim-Subscription-Key: &lt;insert_subscription_key&gt; Ocp-Apim-Subscription-Region: westeurope [{     "text": "Hello World" }] </LI-CODE> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hboelman_0-1633420936450.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/315199i5DC20AC440899DB8/image-size/large?v=v2&amp;px=999" role="button" title="hboelman_0-1633420936450.png" alt="hboelman_0-1633420936450.png" /></span></P> <H2 aria-level="2"><SPAN data-contrast="none">What about bigger documents?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN><SPAN data-contrast="none">If 
your content exceeds the&nbsp;10.000 character&nbsp;limit&nbsp;and&nbsp;you want to keep your&nbsp;original document structure you can use&nbsp;the “Document Translation” API.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><I><SPAN data-contrast="none">Document Translation is a cloud-based feature of the </SPAN></I><A href="#" target="_blank" rel="noopener"><I><SPAN data-contrast="none">Azure Translator</SPAN></I></A><I><SPAN data-contrast="none"> service and is part of the Azure Cognitive Service family of REST APIs. In this overview, you’ll learn how the Document Translation API can be used to translate multiple and complex documents across all </SPAN></I><A href="#" target="_blank" rel="noopener"><I><SPAN data-contrast="none">supported languages and dialects</SPAN></I></A><I><SPAN data-contrast="none"> while preserving original document structure and data format.</SPAN></I><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">To use this&nbsp;service,&nbsp;you need to upload the source document to a blob storage&nbsp;and provide a&nbsp;space for the target documents. When you have that settled you can send the request to the&nbsp;API.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none"><A href="#" target="_self">Read more about this API&nbsp;here</A>.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H2 aria-level="2"><SPAN data-contrast="none">Customize the translation to make sense in a specific context</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">Now we have covered how to get started with out of the box translation, but in every language&nbsp;and context&nbsp;there might be a need to customize to enforce specific domain terminology.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Depending upon the context, you can use standard text translation API available customization tools or train your own domain specific custom engines.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">First let’s&nbsp;take a look&nbsp;what can be done without&nbsp;needing to train your own custom model.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H2 aria-level="2"><SPAN data-contrast="none">Profanity filtering</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">Normally the Translator service retains profanity that is present in the source in the translation. The degree of profanity and the context that makes words profane differ between cultures. 
As a result, the degree of profanity in the target language may be amplified or reduced.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">If you want to avoid seeing profanity in the translation, even if profanity is present in the source text, use the profanity filtering option available in the&nbsp;Translate() method. This option allows you to choose whether you want to see profanity deleted, marked with appropriate tags, or take no action taken.</SPAN><SPAN>&nbsp;<BR /></SPAN><A href="#" target="_self"><SPAN data-contrast="none">Read more here</SPAN></A></P> <H2 aria-level="2"><SPAN data-contrast="none">Prevent translation of content</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">The Translator allows you to tag content so that it isn't translated. For example, you may want to tag code, a brand name, or a word/phrase that doesn't make sense when localized.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><A href="#" target="_self"><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">Read more here&nbsp;</SPAN></A></P> <H2 aria-level="2"><SPAN data-contrast="none">Dynamic dictionary</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">If you already know the translation you want to apply to a&nbsp;word, a phrase or a sentence, you can supply it as markup within the request.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><A href="#" target="_self"><SPAN data-contrast="none">Read more here</SPAN></A></P> <H2 aria-level="2"><SPAN data-contrast="none">I need more customization</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">If the above features are not&nbsp;sufficient,&nbsp;you can start with Custom Translator&nbsp;to train a specific model based on&nbsp;previously translated documents&nbsp;in your domain.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H2 aria-level="2"><SPAN data-contrast="none">What is Custom Translator?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">Custom Translator is a feature of the Microsoft Translator service, which enables Translator enterprises, app developers, and language service providers to build customized neural machine translation (NMT) systems. The customized translation systems seamlessly integrate into existing applications, workflows, and websites.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Custom Translator supports more than three dozen languages, and maps directly to the languages available for NMT. 
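</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">That integration happens through the same Translate call shown earlier: once a custom model is deployed, you pass its category ID on the request. The sketch below, using the Python requests package, ties this together with the profanity filtering and dynamic dictionary options described above; the key, region and category ID are placeholders.</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="python"># Sketch only: call the Translate API with a deployed Custom Translator category,
# profanity filtering and a dynamic dictionary entry. Key, region and category ID are placeholders.
import requests

url = "https://api.cognitive.microsofttranslator.com/translate"
params = {
    "api-version": "3.0",
    "to": "nl",
    "category": "&lt;your-custom-translator-category-id&gt;",  # omit to use the standard model
    "profanityAction": "Marked",                           # or "Deleted" / "NoAction"
}
headers = {
    "Ocp-Apim-Subscription-Key": "&lt;your-key&gt;",
    "Ocp-Apim-Subscription-Region": "westeurope",
    "Content-Type": "application/json",
}
body = [{
    # The mstrans:dictionary markup forces a specific translation for one phrase (illustrative names).
    "text": 'The &lt;mstrans:dictionary translation="Contoso Fietsen"&gt;Contoso Bikes&lt;/mstrans:dictionary&gt; store is open.'
}]

response = requests.post(url, params=params, headers=headers, json=body)
print(response.json())</LI-CODE> <P>&nbsp;</P> <P><SPAN data-contrast="none">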
For a complete list, see <A href="#" target="_self">Microsoft Translator Languages</A>.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H2 aria-level="2"><SPAN data-contrast="none">Highlights</SPAN><SPAN><BR /></SPAN></H2> <H3><STRONG><SPAN data-contrast="auto">Collaborate with others</SPAN></STRONG></H3> <P><SPAN data-contrast="auto">Collaborate with your team by sharing your work with different people.</SPAN><SPAN>&nbsp;<BR /></SPAN><A href="#" target="_self">Read more</A></P> <H3><STRONG><SPAN data-contrast="auto">Zero downtime deployments of the model</SPAN></STRONG><SPAN>&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Deploy a new version of the model without any downtime, using the in-build deployments slots.</SPAN><SPAN>&nbsp;<BR /></SPAN><A href="#" target="_self">Read more</A></P> <H3><STRONG><SPAN data-contrast="auto">Use a dictionary to build your models</SPAN></STRONG></H3> <P><SPAN data-contrast="auto">If you don't have&nbsp;training&nbsp;data set, you can train a model with only dictionary data.</SPAN><SPAN>&nbsp;<BR /></SPAN><A href="#" target="_self">Read more</A></P> <H3><STRONG><SPAN data-contrast="auto">Build systems that knows your business terminology</SPAN></STRONG></H3> <P><SPAN data-contrast="auto">Customize and build translation systems using parallel documents, that understand the terminologies used in your own business and industry.</SPAN><SPAN>&nbsp;<BR /></SPAN><A href="#" target="_self">Read more</A></P> <H2 aria-level="2"><SPAN data-contrast="none">Get started</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="auto">To get started get your&nbsp;Translator endpoint setup (see top of this blog) and go the <A href="#" target="_self">Custom Translator portal</A>.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">When you are in setup your workspace by selecting a region, entering your translator subscription key and give it a name.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">A <A href="#" target="_self">complete&nbsp;quick start</A>&nbsp;can be found on Microsoft Docs.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><LI-VIDEO vid="https://www.youtube.com/watch?v=TykB6WDTkRc" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="http://i.ytimg.com/vi/TykB6WDTkRc/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <H2 aria-level="2"><SPAN data-contrast="none">Best practices</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="auto">We will&nbsp;close of&nbsp;the blog with a few best practices and tips on how to get the best model using the Custom Translation Service.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3><SPAN 
data-contrast="none">Training material</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <TABLE data-tablestyle="MsoTableGrid" data-tablelook="1056" aria-rowcount="6"> <TBODY> <TR aria-rowindex="1"> <TD width="160.188px" height="30px" data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">What goes in</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="30px" data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">What it does</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="567.391px" height="30px" data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Rules to follow</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="2"> <TD width="160.188px" height="30px" data-celllook="69905"> <P><SPAN data-contrast="none">Tuning documents</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="30px" data-celllook="69905"> <P><SPAN data-contrast="none">Train the NMT parameters.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD colspan="1" rowspan="2" width="567.391px" height="87px" data-celllook="69905"> <P><SPAN data-contrast="none">Be&nbsp;</SPAN><STRONG><SPAN data-contrast="none">strict</SPAN></STRONG><SPAN data-contrast="none">. Compose them to be optimally representative of what you are going to translate&nbsp;</SPAN><STRONG><I><SPAN data-contrast="none">in the future</SPAN></I></STRONG><SPAN data-contrast="none">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="3"> <TD width="160.188px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Test documents</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Calculate the&nbsp;<A href="#" target="_self">BLEU score</A>&nbsp;– just for you.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="4"> <TD width="160.188px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Phrase dictionary</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Forces the given translation with a probability of 1.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="567.391px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Be&nbsp;</SPAN><STRONG><SPAN data-contrast="none">restrictive</SPAN></STRONG><SPAN data-contrast="none">. Case-sensitive and safe to use&nbsp;</SPAN><STRONG><I><SPAN data-contrast="none">only&nbsp;</SPAN></I></STRONG><SPAN data-contrast="none">for compound nouns and named entities. 
Better to not use and let the system learn.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="5"> <TD width="160.188px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Sentence dictionary</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Forces the given translation with a probability of 1.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="567.391px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Case-insensitive and good for common in domain short sentences.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="6"> <TD width="160.188px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Bilingual training documents</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="244.422px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Teaches the system how to translate.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD width="567.391px" height="57px" data-celllook="69905"> <P><SPAN data-contrast="none">Be&nbsp;</SPAN><STRONG><SPAN data-contrast="none">liberal</SPAN></STRONG><SPAN data-contrast="none">. Any in-domain human translation is better than MT. Add and remove documents as you go and try to improve the score.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><STRONG><SPAN class="TextRun SCXW253794439 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW253794439 BCX8">BLEU Score</SPAN></SPAN></STRONG><SPAN class="LineBreakBlob BlobObject DragDrop SCXW253794439 BCX8"><STRONG><SPAN class="SCXW253794439 BCX8">&nbsp;<BR /></SPAN></STRONG></SPAN><SPAN class="TextRun SCXW253794439 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW253794439 BCX8">BLEU&nbsp;is the industry standard method for evaluating the “precision” or accuracy of the translation model. 
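</SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">To get an intuition for the metric before looking at the numbers Custom Translator reports, you can score a handful of sentence pairs yourself. The sketch below uses the third-party sacrebleu package, which is not part of the Translator service; it simply compares machine output against reference translations.</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="python"># Sketch only: compute a corpus-level BLEU score locally with the third-party sacrebleu package
# (pip install sacrebleu). This illustrates the metric, it is not the service's own scoring.
import sacrebleu

machine_output = [
    "The patient received the second dose on 4 May.",
    "Store the vaccine at minus seventy degrees.",
]
references = [[
    "The patient got the second dose on May 4.",
    "Keep the vaccine at minus seventy degrees.",
]]

bleu = sacrebleu.corpus_bleu(machine_output, references)
print(f"BLEU: {bleu.score:.1f}")  # higher is better; output identical to the references scores 100</LI-CODE> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW253794439 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW253794439 BCX8">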
Though other methods of evaluation exist, Microsoft Translator<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW253794439 BCX8">relies</SPAN><SPAN class="NormalTextRun SCXW253794439 BCX8"><SPAN>&nbsp;</SPAN>BLEU method to report accuracy to Project Owners.</SPAN></SPAN><SPAN class="LineBreakBlob BlobObject DragDrop SCXW253794439 BCX8"><SPAN class="SCXW253794439 BCX8">&nbsp;<BR /></SPAN><A href="#" target="_self">Read more</A><BR class="SCXW253794439 BCX8" /></SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <TABLE data-tablestyle="MsoTableGrid" data-tablelook="1056" aria-rowcount="9"> <TBODY> <TR aria-rowindex="1"> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Suitable</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Unsuitable</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="2"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Prose</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Dictionaries/Glossaries (exceptions)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="3"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">5-18 words length</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Social media (target side)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="4"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Correct spelling (target side)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Poems/Literature</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="5"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">In domain (be liberal)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Fragments (OCR, ASR)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="6"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">&gt; 20,000 sentences (~200K words)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Subtitles (exceptions)</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="7"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Monolingual material: Industry</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="8"> <TD data-celllook="69905"> <P><SPAN 
data-contrast="none">TAUS, LDC, Consortia</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="9"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Start large, then specialize by removing</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> </TBODY> </TABLE> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <TABLE data-tablestyle="MsoTableGrid" data-tablelook="1056" aria-rowcount="5"> <TBODY> <TR aria-rowindex="1"> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Question</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">More questions and answers</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="2"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">What constitutes a domain?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Documents of similar terminology,&nbsp;style&nbsp;and register.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="3"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Should I make a big domain with everything, or smaller domains per product or project?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Depends.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="4"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">On what?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">I don’t know. Try it out.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="5"> <TD data-celllook="69905"> <P><SPAN data-contrast="none">What do you recommend?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="none">Throw all training data together, have specific tuning and test sets, and observe the delta between training results. 
If delta is &lt; 1 BLEU point, it is not worth maintaining multiple systems.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> </TBODY> </TABLE> <P><STRONG><SPAN data-contrast="auto">Auto or Manual Selection of Tuning and Test</SPAN></STRONG><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="auto">Goal</SPAN></STRONG><SPAN data-contrast="auto">: Tuning and test sentences are optimally representative of what you are going to translate in the future.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:216}">&nbsp;</SPAN></P> <TABLE data-tablestyle="MsoTableGrid" data-tablelook="1056" aria-rowcount="2"> <TBODY> <TR aria-rowindex="1"> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Auto Selection</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><STRONG><SPAN data-contrast="none">Manual Selection</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> <TR aria-rowindex="2"> <TD data-celllook="69905"> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Convenient.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Good if you know that your training data is representative of what you are going to translate.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Easy to redo when you grow or shrink the domain.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Remains static over repeated training runs, until you request re-generation.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> <TD data-celllook="69905"> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Fine tune to your future needs.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Take more freedom in composing your training data.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">More data – better domain coverage.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">+</SPAN><SPAN data-contrast="none">Remains static over repeated training runs.</SPAN><SPAN 
data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:533,&quot;335559740&quot;:216,&quot;335559991&quot;:533}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> </TD> </TR> </TBODY> </TABLE> <P><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">When you submit documents to be used for training a custom system, the documents undergo a series of processing and filtering steps to prepare for training. These steps are explained here.&nbsp;The knowledge&nbsp;of the filtering may help you understand the sentence count displayed in custom translator as well as the steps you may take yourself to prepare the documents for training with Custom Translator.</SPAN><SPAN>&nbsp;<BR /></SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Sentence alignment</SPAN></STRONG><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">If your document isn't in XLIFF, TMX, or ALIGN format, Custom Translator aligns the sentences of your source and target documents to each other, sentence by sentence. Translator doesn't perform document alignment – it follows your naming of the documents to find the matching document of the other language. Within the document, Custom Translator tries to find the corresponding sentence in the other language. It uses document markup like embedded HTML tags to help with the alignment.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">If you see a large discrepancy between the number of sentences in the source and target side documents, your document may not have been parallel in the first place, or for other reasons couldn't be aligned. The document pairs with a large difference (&gt;10%) of sentences on each side warrant a second look to make sure they're indeed parallel. Custom Translator shows a warning next to the document if the sentence count differs suspiciously.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Deduplication&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Custom Translator removes the sentences that are present in test and tuning documents from training data. The removal happens dynamically inside of the training run, not in the data processing step. 
Custom Translator also removes duplicate sentences and reports the sentence count to you in the project overview before such removal.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Length filter&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></H3> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Remove sentences with only one word on&nbsp;either side.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">Remove sentences with more than 100 words on either side. Chinese, Japanese, Korean are exempt.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">Remove sentences with fewer than 3 characters. Chinese, Japanese, Korean are exempt.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><SPAN data-contrast="auto">Remove sentences with more than 2000 characters for Chinese, Japanese, Korean.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"><SPAN data-contrast="auto">Remove sentences with less than 1% alpha characters.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="6" data-aria-level="1"><SPAN data-contrast="auto">Remove dictionary entries containing more than 50 words</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <H3><STRONG><SPAN data-contrast="auto">White space&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Replace any sequence of white-space characters including tabs and CR/LF sequences with a single space character.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Remove leading or trailing space in the sentence.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> 
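<P><SPAN data-contrast="auto">If you want to estimate how the length and white-space rules above will affect your corpus before uploading, the snippet below is a minimal, illustrative Python sketch of those checks. It is only an approximation for local sanity checking, with placeholder thresholds, and it is not the exact implementation Custom Translator runs (for example, it ignores the Chinese, Japanese, and Korean exemptions).</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="python">import re

# Thresholds mirroring the documented filters; the CJK-specific exemptions are not handled here.
MAX_WORDS = 100
MIN_CHARS = 3
MIN_ALPHA_RATIO = 0.01


def normalize_whitespace(sentence):
    """Collapse tabs, CR/LF sequences, and repeated spaces into single spaces, then trim the ends."""
    return re.sub(r"\s+", " ", sentence).strip()


def keep_sentence_pair(source, target):
    """Rough local approximation of the length filters listed above."""
    for text in (source, target):
        word_count = len(text.split())
        alpha_ratio = sum(ch.isalpha() for ch in text) / max(len(text), 1)
        if word_count == 1 or word_count > MAX_WORDS:
            return False  # only one word, or more than 100 words, on either side
        if MIN_CHARS > len(text):
            return False  # fewer than 3 characters
        if MIN_ALPHA_RATIO > alpha_ratio:
            return False  # less than 1% alpha characters
    return True


raw_pairs = [
    ("The   quick  brown fox.", "Le  renard  brun rapide."),
    ("OK", "OK"),  # too short on both sides - would be filtered out
]
pairs = [(normalize_whitespace(s), normalize_whitespace(t)) for s, t in raw_pairs]
kept = [(s, t) for s, t in pairs if keep_sentence_pair(s, t)]
print(kept)</LI-CODE> <P>&nbsp;</P>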
<H3><STRONG><SPAN data-contrast="auto">Sentence end punctuation&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Replace multiple sentence end punctuation characters with a single instance.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Japanese character normalization: convert full-width letters and digits to half-width characters.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Unescaped XML tags</SPAN></STRONG><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Filtering transforms unescaped tags into escaped tags:&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">&amp;lt;&nbsp;becomes &amp;amp;lt;&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">&amp;gt;&nbsp;becomes &amp;amp;gt;&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">&amp;amp;&nbsp;becomes &amp;amp;amp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Invalid characters&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Custom Translator removes sentences that contain Unicode character U+FFFD. The character U+FFFD indicates a failed encoding conversion.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3><STRONG><SPAN data-contrast="auto">Before uploading data</SPAN></STRONG><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-contrast="auto">Remove sentences with invalid encoding.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Remove Unicode control characters.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="TextRun SCXW115592506 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW115592506 BCX8">Bash script example:&nbsp;</SPAN></SPAN><SPAN class="EOP SCXW115592506 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></SPAN></H3> <P>&nbsp;</P> <LI-CODE lang="bash">#!/bin/bash
# This command would run recursively in the current directory to
# remove Unicode control chars, e.g., &amp;#xD;, &amp;#xF0;
find . -type f -exec sed -i 's/&amp;#x.*[[:alnum:]].*;//g' {} +

## Validate the text replacement
grep '&amp;#x.*[[:alnum:]].*;' -r * | awk -F: {'print $1'} | sort -n | uniq -c</LI-CODE> <P>&nbsp;</P> <H2 aria-level="2"><SPAN data-contrast="none">More resources:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <UL> <LI><A href="#" target="_self"><SPAN>Microsoft Translator launching Neural Network based translations for all its speech languages</SPAN></A></LI> <LI><A href="#" target="_self"><SPAN>Extended documentation on Microsoft Docs</SPAN></A></LI> </UL> Mon, 11 Oct 2021 07:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/customize-a-translation-to-make-sense-in-a-specific-context/ba-p/2811956 hboelman 2021-10-11T07:00:00Z Case Study: Effectively Using Cognitive Search to Support Complex AI Scenarios https://gorovian.000webhostapp.com/?exam=t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078 <H2>Introduction</H2> <P>My team owns a platform for quickly creating, testing, and deploying machine learning models. Our platform is the backbone of a variety of AI solutions offered in Microsoft developer tools, such as the <A href="#" target="_blank" rel="noopener">Visual Studio Search</A> experience and the <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/empowering-azure-cli-developers-with-ai/ba-p/1701251" target="_blank" rel="noopener">Azure CLI</A>. Our <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/ai-powered-azure-tools/ba-p/2080799" target="_blank" rel="noopener">AI services</A> employ complex models that routinely handle 40 requests/s with an average response time under 40ms.</P> <P>&nbsp;</P> <P>One of the Microsoft offerings we leverage to make this happen is <A href="#" target="_blank" rel="noopener">Azure Cognitive Search</A>. Unique features offered in Cognitive Search, such as natural language processing on user input, configurable keywords, and scoring parameters, enable us to drastically improve the quality of our results. We wanted to share our two-year journey of working with Cognitive Search and all the tips and tricks we have learned along the way to provide fast response times and high-quality results without over-provisioning.</P> <P>&nbsp;</P> <P>In this article, we focus on our Visual Studio Search service to explain our learnings. The Visual Studio Search service provides real-time in-tool suggestions to user queries typed in the Visual Studio search box. These queries can be in <A href="#" target="_blank" rel="noopener">14 language locales</A>, and can originate from across the globe and from many active versions of Visual Studio. Our goal is to respond to queries from all the locales and active versions in under 300ms. To respond to these queries, we store tens of thousands of small documents (the units we want to search through) for each locale in Cognitive Search. We also spread out over 5 regions globally to ensure redundancy and to decrease latency. We encountered many questions as we set up Cognitive Search and learned how best to integrate it into our service. 
We will walk through these questions and our solutions below.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mabooe_6-1633118267973.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314444iA6C2AA71578A643B/image-size/medium?v=v2&amp;px=400" role="button" title="mabooe_6-1633118267973.png" alt="mabooe_6-1633118267973.png" /></span><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mabooe_7-1633118267977.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314445i8DD85E6516B6C873/image-size/medium?v=v2&amp;px=400" role="button" title="mabooe_7-1633118267977.png" alt="mabooe_7-1633118267977.png" /></span></P> <P><FONT size="2"><EM>Figure 1: Visual Studio search with results not from Cognitive Search (left) and with results from Cognitive Search (right).</EM></FONT></P> <H2>&nbsp;</H2> <H2>How should we configure our Cognitive Search instance?</H2> <P>There are <FONT size="3">a </FONT>lot of service options, some of which really improved our services (even if we didn’t realize it at the start) and others that haven’t been needed. Here are the most important ones we’ve found.</P> <P>&nbsp;</P> <P>We initially planned our design around having an <A href="#" target="_blank" rel="noopener">index</A> (a Cognitive Search “table” containing our documents) per locale and per minor version of Visual Studio. With roughly four releases each year, we would have 56 new indexes (14 locales * 4 minor releases) each year. After looking at <A href="#" target="_blank" rel="noopener">pricing</A> and tier <A href="#" target="_blank" rel="noopener">limitations</A>, we realized that in less than four years, we would end up running out of indexes in all but the S3 High Storage option. We opted to redesign each index to contain the data for clients across all versions. Though, we still have one index per locale to make it easy to use language-specific <A href="#" target="_blank" rel="noopener">Skillsets</A>. This change enabled us to use the S1 tier. At the time of writing, a unit of S1 is 12.5% of the cost of an S3 unit. Across multiple instances and over the course of years, that is quite a huge savings.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mabooe_8-1633118267980.png" style="width: 639px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/314446i051A4829C6B4408B/image-dimensions/639x235?v=v2" width="639" height="235" role="button" title="mabooe_8-1633118267980.png" alt="mabooe_8-1633118267980.png" /></span></P> <P><EM><FONT size="2">Figure 2: Part of the Cognitive Search tier breakdown. Highlights different index limits.</FONT></EM></P> <P>&nbsp;</P> <P>Another factor we have adjusted over time is our Replica count. Cognitive Search supports <A href="#" target="_blank" rel="noopener">Replicas and Partitions</A>.</P> <P>&nbsp;</P> <P>A replica is essentially a copy of your index that helps load balance your requests, improving throughput and availability. Having two replicas is needed to have a <A href="#" target="_blank" rel="noopener">Service Level Agreement (SLA)</A> for read operations. We’ve settled on three replicas after trying different replica counts in our busiest region. 
We found three was the right balance of keeping latency and costs low for us, but you should definitely experiment with your own traffic to find the right replica count for you. Something we want to play around with in the future is scaling differently in each region, which may also reduce costs.</P> <P>&nbsp;</P> <P>Partitions, on the other hand, spread the index across multiple storage volumes, allowing you to store more data and improving I/O for reads and writes. We don’t have a ton of data, our queries tend to be relatively short, and we only update our indexes roughly once every few months. So having more storage or better I/O performance isn’t as useful to us. As a result, we just have one partition.</P> <H2>&nbsp;</H2> <H2>What is causing inconsistency in search scores?</H2> <P>Internally, Cognitive Search uses <A href="#" target="_blank" rel="noopener">shards</A> to break up the index for more efficient querying. By default, the scores assigned to documents are calculated within each shard, which means faster response times. If you don’t anticipate this behavior though, it can be quite surprising, as it may mean identical requests get different results from the index. This may be more prevalent if you have a very small data set.</P> <P>&nbsp;</P> <P>Fortunately, there is a <A href="#" target="_blank" rel="noopener">feature</A> <EM>scoringStatistics=global</EM> that allows you to specify that scores should be computed across all shards. This resolved our issue and required just a minor change! In our case, we did not notice any performance impact either. Of course, your scenario may differ.</P> <H2>&nbsp;</H2> <H2>How do we reliably update and populate our indexes?</H2> <P>Cognitive Search ingests data via what are called <A href="#" target="_blank" rel="noopener">indexers</A>. They pull in data from a source, such as <A href="#" target="_blank" rel="noopener">Azure Table Storage</A>, and incrementally populate the indexes as new data appears in the source. Each of our new data dumps has ~10,000 documents per index, which needed some planning.</P> <P>&nbsp;</P> <P>There is a limit to how much concurrently running indexers can handle at a time. This means a single indexer run is not guaranteed to populate all the new documents available. Indexers can be set to run on a schedule, which will mitigate this problem. For our use case, <A href="#" target="_blank" rel="noopener">executing a script</A> to automatically rerun the indexers gets our data populated faster, so we use that option.</P> <P>&nbsp;</P> <P>Normally, if a skillset changes, the index must be fully re-populated with data from the new skill. This is obviously not ideal for uptime. Fortunately, there is a preview feature called <A href="#" target="_blank" rel="noopener">Incremental Caching</A>. It uses Azure Storage to cache content to determine what can be left untouched and what needs to be updated. This makes updates much faster and allows us to operate without downtime. It also saves money, because you don’t have to rerun skills that call Cognitive Services, which is a nice bonus.</P> <H2>&nbsp;</H2> <H2>How do we maintain availability under heavy load?</H2> <P>At one point our service had a temporary outage in some of our regions caused by a spike in traffic that resulted in requests getting <A href="#" target="_blank" rel="noopener">throttled</A> by Cognitive Search. This incident was caused by an hours-long series of requests that were two orders of magnitude more than our normal peak. We had a cascade of failures that resulted in Cognitive Search throttling us, causing our service to restart and bombard Cognitive Search with requests used to check its status, causing it to throttle us more, etc. Here are the steps we took to prevent this from happening again.</P> <H3>&nbsp;</H3> <H3>Rate Limiting</H3> <P>Our service did implement Rate Limiting (via <A href="#" target="_blank" rel="noopener">this awesome open source library</A>), but the configuration threshold was set too high. Throttling only triggered if an IP address was the source of approximately forty percent of our traffic across all regions. We have since updated this to be more reflective of our actual per-IP traffic. We analyzed usage data to find the highest legitimate peak traffic for our most trafficked region. We then set our throttling slightly above that rate, so that real users are not affected but bad actors are quickly slowed down.</P> <H3>&nbsp;</H3> <H3>Smart Retry</H3> <P>Our service was set up to retry timed-out requests to Cognitive Search immediately. On a small scale this would be fine, but if the service was already throttled, it would only give Cognitive Search more reason to continue doing so. The official recommendation is to use an <A href="#" target="_blank" rel="noopener">Exponential Backoff Strategy</A> to space out our requests and give Cognitive Search time to recover.</P>
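<P>To make that recommendation concrete, here is a minimal sketch of an exponential backoff loop around a Cognitive Search query. This is illustrative rather than our production code: the endpoint, key, retry limits, and the query_cognitive_search helper are placeholders you would swap for your own client and tuning.</P> <P>&nbsp;</P> <LI-CODE lang="python">import random
import time

import requests

# Placeholder values - substitute your own service name, index, API version, and query key.
SEARCH_URL = "https://YOUR-SERVICE.search.windows.net/indexes/YOUR-INDEX/docs/search"
API_VERSION = "2020-06-30"
API_KEY = "YOUR-QUERY-KEY"


def query_cognitive_search(payload):
    """Single attempt against the Cognitive Search REST query endpoint."""
    response = requests.post(
        SEARCH_URL,
        params={"api-version": API_VERSION},
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        json=payload,
        timeout=2,
    )
    response.raise_for_status()
    return response.json()


def query_with_backoff(payload, max_attempts=5):
    """Retry throttled or timed-out queries with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return query_cognitive_search(payload)
        except requests.exceptions.RequestException as error:
            status = getattr(getattr(error, "response", None), "status_code", None)
            # Only retry throttling (429/503) and transient timeouts; re-raise anything else.
            if status is not None and status not in (429, 503):
                raise
            if attempt == max_attempts - 1:
                raise
            # 0.5s, 1s, 2s, 4s ... plus jitter so instances do not retry in lockstep.
            time.sleep((2 ** attempt) * 0.5 + random.uniform(0, 0.25))


# Example usage:
# results = query_with_backoff({"search": "git push", "top": 10})</LI-CODE> <P>&nbsp;</P>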
<H3>&nbsp;</H3> <H3>Input Validation</H3> <P>We found that most queries used in the attack mentioned at the top of this section were exceptionally long. Longer queries are generally more computationally expensive and take longer for Cognitive Search to process. After analyzing real user traffic, we now ignore any input past a certain length, as it shouldn’t affect most users. We may do other things to clean up our user input in the future if we have a need.</P> <H2>&nbsp;</H2> <H2>How do we deliver faster response times?</H2> <P>Our services need to respond to users in real time as they interact with their tools, so cutting down on extra network calls and wait time is a big way to improve request latency. Fortunately, many of our queries are similar so we can leverage <A href="#" target="_blank" rel="noopener">caching</A>. We use a two-tier caching strategy to cut our response time in half. Non-cached items average ~70ms, while cached items average ~30ms. The first tier is faster but localized to that service instance. The second tier is slower but can be larger and shared among other service instances in a region.</P> <P>&nbsp;</P> <P>Our first tier is an <A href="#" target="_blank" rel="noopener">in-memory cache</A>. It is the fastest way to respond to clients. In-memory caching can get us a response in less than a millisecond. However, you must make sure your cache never grows too large, as that might slow down or crash the service. We’ve been using telemetry to see how big the caches get and correlate that with actual memory usage, which tells us how to configure the size of the cache and how long items can live in it.</P> <P>&nbsp;</P> <P>We then use <A href="#" target="_blank" rel="noopener">Redis</A> as our second tier. It does require an extra network call, which increases latency. But it gives us a way to share a cache across many service instances in the same region. It means we don’t “waste” work already done by another instance. It also has more memory available than our services, which allows us to hold more information than we could in local memory alone.</P>
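<P>The sketch below shows the shape of this two-tier lookup, assuming a small in-process cache in front of a shared Redis instance. The TTL values and the fetch_from_search callback are illustrative assumptions, not our production configuration.</P> <P>&nbsp;</P> <LI-CODE lang="python">import json
import time

import redis

# Illustrative settings - not our production values.
LOCAL_TTL_SECONDS = 60
REDIS_TTL_SECONDS = 600

redis_client = redis.Redis(host="localhost", port=6379)
_local_cache = {}  # query -> (timestamp, results); unbounded here for brevity


def get_results(query, fetch_from_search):
    """Two-tier lookup: local memory first, then Redis, then the real search call."""
    now = time.time()

    # Tier 1: in-process cache (sub-millisecond, but private to this instance).
    entry = _local_cache.get(query)
    if entry is not None and LOCAL_TTL_SECONDS > now - entry[0]:
        return entry[1]

    # Tier 2: Redis, shared by every service instance in the region.
    cached = redis_client.get(query)
    if cached is not None:
        results = json.loads(cached)
    else:
        # Both tiers missed: make the real (and slowest) call, for example the
        # query_with_backoff helper sketched earlier, then populate Redis.
        results = fetch_from_search(query)
        redis_client.setex(query, REDIS_TTL_SECONDS, json.dumps(results))

    _local_cache[query] = (now, results)
    return results</LI-CODE> <P>&nbsp;</P> <P>A real implementation would also bound the size of the local cache (for example with an LRU policy), for the memory reasons described above.</P> <P>&nbsp;</P>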
<H2>&nbsp;</H2> <H2>How do we improve the quality of our results?</H2> <P>Cognitive Search has lots of dials to improve the quality of results, but we have opted to add a second re-ranking step after getting the initial results from Cognitive Search. This gives us more control and is something we can easily update independently of Cognitive Search.</P> <P>&nbsp;</P> <P>A key indicator that a result is valuable is if people actually use it. So, we generate a machine learning model that takes usage information into consideration. This model file is stored in <A href="#" target="_blank" rel="noopener">Azure Blob Storage</A>, from where our service loads it into memory. For each request, the service takes into account the user’s search context, such as whether a project was open or whether Visual Studio was in debugging mode. The top results from Cognitive Search are compared to the contextual, usage-ranked data in our model. The scores in both result lists are normalized and re-ranked, before being returned to the user.</P> <P>&nbsp;</P> <P>Though it adds some complexity, it is a relatively straightforward process that gives us dramatically better results for our users than without it. Of course, your project may not need such a step, or you might prefer sticking to Cognitive Search tweaks to avoid that extra complexity.</P> <H2>&nbsp;</H2> <H2>Summary</H2> <P>Cognitive Search has enabled us to provide better quality results to our users. Here are the highlights of what we learned to achieve high uptime, fast responses, and lower costs.</P> <UL> <LI>Choose a service tier for the future and design around it</LI> <LI>Update replicas and partitions over time to fit your service needs</LI> <LI>Check to see if shard score optimizations cause issues with your results and disable them</LI> <LI>Plan for indexer runs with lots of documents to take time and prod them along</LI> <LI>Leverage Incremental Caching in your index</LI> <LI>Employ exponential backoff when retrying requests</LI> <LI>Use your own (correctly configured) rate limiting</LI> <LI>Cache what results you can at the service level</LI> <LI>Consider post-processing your results</LI> <LI>You can also check out Cognitive Search’s <A href="#" target="_blank" rel="noopener">optimization docs</A></LI> </UL> <P>I hope these learnings can help you improve your projects and get the most out of Cognitive Search!</P> <H2>&nbsp;</H2> <H2>Suggested Reads</H2> <UL> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/how-we-improved-engineers-productivity-in-ml-training-pipeline/ba-p/2712409" target="_blank" rel="noopener">Learn about our machine learning pipelines</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/ai-powered-azure-tools/ba-p/2080799" target="_blank" rel="noopener">Introduction to our service design</A></LI> <LI><A href="#" target="_blank" rel="noopener">Announcement for our enhanced Visual Studio search</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/empowering-azure-cli-developers-with-ai/ba-p/1701251" target="_blank" rel="noopener">Our AI Powered Azure CLI integrations</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-tools/announcing-az-predictor/ba-p/1873104" target="_blank" rel="noopener">Announcement for Azure PowerShell predictions</A></LI> </UL> Fri, 01 Oct 2021 20:14:50 GMT 
https://gorovian.000webhostapp.com/?exam=t5/azure-ai/case-study-effectively-using-cognitive-search-to-support-complex/ba-p/2804078 mabooe 2021-10-01T20:14:50Z Latest updates on Azure Neural TTS: new voices for casual conversations https://gorovian.000webhostapp.com/?exam=t5/azure-ai/latest-updates-on-azure-neural-tts-new-voices-for-casual/ba-p/2761278 <P><EM>This post is co-authored with Melinda Ma, Yueying Liu, Garfield He and Sheng Zhao</EM></P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Neural Text to Speech</A>&nbsp;(Neural TTS), a powerful speech synthesis capability of Cognitive Services on Azure, enables you to convert text to lifelike speech which is <A href="#" target="_blank" rel="noopener">close to human-parity</A>.</P> <P>&nbsp;</P> <P>Since its launch, Azure Neural TTS has been widely applied to all kinds of scenarios, from voice assistants to news reading and audiobook creation, etc. and we have seen more customer requests to support natural conversations that are casual and less formal. Today we are glad to announce a few updates on Neural TTS with a focus on the new voices that are optimized for casual conversation scenarios.</P> <P>&nbsp;</P> <H2>Conversational voices: scenarios and challenges</H2> <P>&nbsp;</P> <P>More TTS voices are used to support human-machine conversations, or machine-facilitated interpersonal communications (e.g, human conversations supported with speech-to-speech translation). In these scenarios, a more relaxed and casual speaking style is usually expected. We outline three typical scenarios for conversational voices or conversational styles below.</P> <P>&nbsp;</P> <H3>Customer service bot</H3> <P>Many enterprises are using voice-enabled chatbots or IVR systems to provide more efficient customer services and transform their traditional customer care. For example, <A href="#" target="_blank" rel="noopener">Vodafone</A> successfully created a natural-sounding customer service bot, TOBi, and used the AI and natural language processing capabilities in Azure to give TOBi a clear personality that could make conversations natural and fun, which drives better customer engagement. After a customer gives their name, instead of a dry request like, “Now tell me your address,” TOBi might say, “Hey, that’s a great name. Now I’d like to know where you live.” In such scenarios, the AI voice is usually expected to sound comforting, friendly, warm, while being professional. Besides providing answers to the customer inquiries, the AI voice is also frequently used to give cheerful greetings and show empathy to customers.<BR /><BR /></P> <H3>Personal assistant</H3> <P>With the emergence of virtual assistant and virtual reality technology, we’ve seen more customers using neural TTS in supporting chit-chats and daily conversations. 
One challenge in making AI-human chat more natural is for the bot to understand chat language, which usually contains special characters, modal particles like “hehe”, “haha”, and “ouch”, emojis like&nbsp;<img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@0277EEB71C55CDE7DB26DB254BF2F52Bhttps://techcommunity.microsoft.com/images/emoticons/laugh_40x40.gif" alt=":lol:" title=":lol:" />&nbsp;<img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@D0941B27F467CBA2580A8C085A80A0CFhttps://techcommunity.microsoft.com/images/emoticons/happyface_40x40.gif" alt=":happyface:" title=":happyface:" />, and repeated letters like “soooo good”, and to provide instant responses in tones that are natural. In addition, expressing different emotions with different messages is a frequently requested capability, so the chat bot can better resonate with human feelings.</P> <P>&nbsp;</P> <H3>Simultaneous speech translation</H3> <P>Speech-to-speech translation is another typical scenario where a conversational AI voice can be used. With broad coverage of over 70 languages and variants, Azure Neural TTS has been used to provide speech output for various translations. During translation, however, it has been challenging to keep the original speaker’s style when his/her speech is translated to another language. Especially in casual speech scenarios, the speaking tones often carry the subtle nuances of the speech and help the audience build emotional connections with the speaker. In such cases, an AI voice that can support simultaneous speech and capture the casual speaking styles can make the speech-to-speech translation more vivid and engaging. &nbsp;</P> <P>&nbsp;</P> <P>Next, we introduce the latest updates in Azure Neural TTS conversational voices in different languages.</P> <P>&nbsp;</P> <H2>Sara: a new chatbot voice in English (US)</H2> <P>&nbsp;</P> <P>Sara, a new conversational voice in English (US), represents a young female adult who talks more casually and fits best for chatbot scenarios. At her initial release, she comes with three built-in emotional styles: cheerful, sad, and angry. In addition, she is capable of reading emojis, making laughter, sighs, or special angry sounds, and expressing emphasis such as “soooo good”, just like a human being would.</P> <P>&nbsp;</P> <P>Check out what those sound effects sound like with the examples below. &nbsp;</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD width="208"> <P><STRONG>Text input</STRONG></P> </TD> <TD width="208"> <P><STRONG>With emoji support</STRONG></P> </TD> </TR> <TR> <TD width="208"> <P data-unlink="true">That's great. I'm not working right now.&nbsp;<img class="lia-deferred-image lia-image-emoji" src="https://techcommunity.microsoft.com/html/@D0941B27F467CBA2580A8C085A80A0CFhttps://techcommunity.microsoft.com/images/emoticons/happyface_40x40.gif" alt=":happyface:" title=":happyface:" /></P> </TD> <TD width="208"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/SaraFSM_CPU24K0817_637655010744911720.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="208"> <P>Uhhh, let me ... let me think, I eat hamburger for dinner.</P> </TD> <TD width="208"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/SaraFSM_CPU24K0817_637655014041446089.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Below is an example of Sara used in a chat bot scenario, engaged in a natural conversation with a human user. (This sample comes from a chitchat between the bot and the human user, and the dialogue is casual and may contain errors.)</P> <P>&nbsp;</P> <P><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Sara_Melinda.wav"></SOURCE></AUDIO></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="QinyingLiao_0-1631898725845.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/311152i1E43A6CD6AC6D4F3/image-size/medium?v=v2&amp;px=400" role="button" title="QinyingLiao_0-1631898725845.png" alt="QinyingLiao_0-1631898725845.png" /></span></P> <P>&nbsp;</P> <P>Additionally, with the new Sara voice you can adjust the speaking style using <A href="#" target="_blank" rel="noopener">SSML</A> and switch between the neutral, cheerful, sad, and angry tones.&nbsp; &nbsp;</P> <P>&nbsp;</P> <TABLE width="624px"> <TBODY> <TR> <TD width="156px"> <P><STRONG>Style</STRONG></P> </TD> <TD width="260px"> <P><STRONG>Script </STRONG></P> </TD> <TD width="208px"> <P><STRONG>TTS output</STRONG></P> </TD> </TR> <TR> <TD width="156px"> <P>Cheerful</P> </TD> <TD width="260px"> <P>I’m so happy to see you.</P> <P>&nbsp;</P> </TD> <TD width="208px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Sara Cheerful.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="156px"> <P>Sad</P> </TD> <TD width="260px"> <P>She felt disheartened when she was not chosen to be on the school team.</P> <P>&nbsp;</P> </TD> <TD width="208px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Sara Sad.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="156px"> <P>Angry</P> </TD> <TD width="260px"> <P>Jack’s father was fuming with anger when he could not find Jack in his room.</P> <P>&nbsp;</P> </TD> <TD width="208px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Sara Angry.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="156px"> <P>Chat</P> </TD> <TD width="260px"> <P>File this under missed connections cuz i'm lost</P> </TD> <TD width="208px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Sara Chat.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE>
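<P>As a quick illustration of switching styles with SSML, here is a minimal Python sketch using the Speech SDK. The key, region, and the en-US-SaraNeural voice name are assumptions made for this example; check the published voice list and your own Speech resource before reusing it.</P> <P>&nbsp;</P> <LI-CODE lang="python">import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials - use your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR-SPEECH-KEY", region="YOUR-REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# The voice name (en-US-SaraNeural is assumed here) and the style values follow the
# documented express-as SSML pattern; switch style to "sad", "angry", or "chat" as needed.
ssml = """
&lt;speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US"&gt;
  &lt;voice name="en-US-SaraNeural"&gt;
    &lt;mstts:express-as style="cheerful"&gt;
      I'm so happy to see you.
    &lt;/mstts:express-as&gt;
  &lt;/voice&gt;
&lt;/speak&gt;
"""

# Synthesize the SSML to the default speaker.
result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis completed.")</LI-CODE> <P>&nbsp;</P>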
<H2>&nbsp;</H2> <H2>Xiaochen and Xiaoyan: new voices in Chinese optimized for spontaneous speech and customer service scenarios&nbsp;</H2> <P>&nbsp;</P> <P>Two new conversational voices have been released in Chinese (Mandarin, simplified): Xiaochen, best used for creating spontaneous speech, and Xiaoyan, best used for customer service scenarios.</P> <P>&nbsp;</P> <P>These two voices are highlighted by the characteristics below:</P> <P>&nbsp;</P> <UL> <LI><STRONG>A more relaxed and casual speaking style</STRONG></LI> </UL> <P>Conversational voices are different from voices for reading, broadcasting, or storytelling. 
In conversations, the voices are usually more relaxed, casual, and the prosody changes often. When people talk casually, the pronunciation of each word may not be complete, the sentence may not be accurate, and the control of the voice does not need to be perfect or professional. The new voices, Xiaochen and Xiaoyan, are produced to resonate this casual speaking style very well.</P> <P>&nbsp;</P> <UL> <LI><STRONG>More natural oral expressions</STRONG></LI> </UL> <P>In spontaneous speech, sentences are often short, and the structure can be simple, or even incomplete. Repetitions, disconnections, supplements, interruptions, disfluency, and redundancy are often observed in spontaneous speech. Both the Xiaochen and Xiaoyan voices deal with the speech expression in these situations well, with our advanced modeling technology. The imperfections in human expression are carefully designed and modeled so the AI voices can learn from these imperfect features, and sound more realistic.</P> <P>&nbsp;</P> <P>The following is a simulated conversation demo in a customer service scenario. In this sample, Xiaoyan acts as a customer service assistant, and Xiaochen acts as a customer. Hear how relaxed and natural Xiaochen and Xiaoyan are when talking to each other.</P> <P>&nbsp;</P> <P><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Xiaochen Xiaoyan - customer service demo.wav"></SOURCE></AUDIO></P> <TABLE width="618"> <TBODY> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>喂,你好。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>喂,你好,我刚才接到这个电话打来的,然后我想问一下是有什么包裹吗,还是什么东西。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>哦,您是要查包裹对吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>呃对,刚接到这个电话他说我有个包裹,但是我不确定,因为我没有寄东西。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,我这里是总机,刚刚可能是分机给您去的电话吧?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>对,然后他叫我打这个电话。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,那这样吧,麻烦您提供一下姓名,我帮您查一下。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>晓辰。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>哪个辰?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>星辰的辰,晓是那个破晓的晓。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯好的,您稍等一下好吗?我刚才帮您看了一下,确实有一份由晓辰姓名签收的包裹。号码是一二三四五六七八九八七,这是您本人吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>&nbsp;是我本人。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,因为这个包裹当时是由于地址不详,没有办法准确投递。这样您把这个详细地址跟我讲一下,我马上安排工作人员给您送过去好吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>哦,我现在在出差。不过也没关系,我到时候找人帮我签收,然后写我名字就可以了,是吧?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,对的。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>寄到鼓楼大街1号吧。那能查到是谁寄的吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>上面没有写的。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>啊那好吧。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>哦,不过这个包裹显示是从北京寄出的。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>呃您稍等一下哈。诶,是从中关村寄出的吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> 
</TD> <TD width="549"> <P>嗯,是的。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>啊,那我知道了。就是我可不可以报一个电话号码给你,然后叫派送的工作人员直接跟这个人联系,可以吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>您说的这个人是也是在原来的地址是吧?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>对,你到时候跟她联系的话,就直接送过去,拿给她就行。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,好的。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>好,谢谢你呀,那有什么问题我还是可以打这个电话对吗?</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>对的,没问题。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>行,谢谢哈,给您添麻烦了。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>嗯,不客气。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaochen</P> </TD> <TD width="549"> <P>好,那再见。</P> </TD> </TR> <TR> <TD width="69"> <P>Xiaoyan</P> </TD> <TD width="549"> <P>麻烦您对我的服务进行评价,再见。</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>New styles for Nanami in Japanese</H2> <P>&nbsp;</P> <P>Nanami is a popular Japanese voice. Three new styles are now available with Nanami: chat, customer service, and cheerful. These styles can be used to make your voice experience more engaging in various scenarios.</P> <P>&nbsp;</P> <TABLE width="616"> <THEAD> <TR> <TD width="102"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="165"> <P><STRONG>Style</STRONG></P> </TD> <TD> <P><STRONG>Description</STRONG></P> </TD> </TR> </THEAD> <TBODY> <TR> <TD width="102"> <P>ja-JP-NanamiNeural</P> </TD> <TD width="165"> <P>style="customerservice"</P> </TD> <TD> <P>Expresses a friendly and helpful tone for customer support</P> </TD> </TR> <TR> <TD width="102">&nbsp;</TD> <TD width="165"> <P>style="chat"</P> </TD> <TD> <P>Expresses a casual and relaxed tone</P> </TD> </TR> <TR> <TD width="102">&nbsp;</TD> <TD width="165"> <P>style="cheerful"</P> </TD> <TD> <P>Expresses a positive and happy tone</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Try the samples below:</P> <P>&nbsp;</P> <TABLE width="623"> <THEAD> <TR> <TD width="106"> <P><STRONG>Style</STRONG></P> </TD> <TD width="344"> <P><STRONG>Script</STRONG></P> </TD> <TD width="174"> <P><STRONG>TTS output</STRONG></P> </TD> </TR> </THEAD> <TBODY> <TR> <TD width="106"> <P>style="customerservice"</P> </TD> <TD width="344"> <P>注文番号もありますか?</P> <P>&nbsp;</P> </TD> <TD width="174"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Nanami_CustomerService.wav"></SOURCE></AUDIO> <P>&nbsp;</P> </TD> </TR> <TR> <TD width="106"> <P>style="chat"</P> </TD> <TD width="344"> <P>家賃はとても安いと思います。</P> <P>&nbsp;</P> </TD> <TD width="174"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Nanami_Chat.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="106"> <P>style="cheerful"</P> </TD> <TD width="344"> <P>みなさんお楽しみに!</P> <P>&nbsp;</P> <P>&nbsp;</P> </TD> <TD width="174"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Nanami_Cheerful.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>Updates on other languages</H2> <P>&nbsp;</P> <P>With more customers adopting Azure Neural TTS, we also collected more feedback on the pronunciation accuracy of our voices in different cases. 
With our latest release, 5 voices have been updated with significant improvements in the accuracy and naturalness. This can bring you better pronunciation and a more natural tone in four languages: id-ID, th-TH, da-DK, and vi-VN.</P> <P>&nbsp;</P> <P>Hear how the improvements are with the samples below.</P> <P>&nbsp;</P> <TABLE width="603"> <TBODY> <TR> <TD width="54"> <P><STRONG>Locale</STRONG></P> </TD> <TD width="60"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="102"> <P><STRONG>Improvement</STRONG></P> </TD> <TD width="180"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="102"> <P><STRONG>Before</STRONG></P> </TD> <TD width="105"> <P><STRONG>After</STRONG></P> </TD> </TR> <TR> <TD width="54"> <P>id-ID</P> </TD> <TD width="60"> <P>Ardi</P> </TD> <TD width="102"> <P>Overall quality</P> </TD> <TD width="180"> <P>La lahir pada dua April seribu sembilan ratus sembilan puluh di Surakarta, Indonesia.</P> </TD> <TD width="102"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/idID Ardi old.wav"></SOURCE></AUDIO></TD> <TD width="105"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/idID Ardi new.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="54"> <P>th-TH</P> </TD> <TD width="60"> <P>Premwadee</P> </TD> <TD width="102"> <P>Overall quality</P> </TD> <TD width="180"> <P>เริ่มจ่ายเงินผ่าน ธ.ก.ส.ถึงมือชาวนาได้ตั้งแต่วันที่ 6 ธ.ค. 62 – 30 ก.ย.63</P> </TD> <TD width="102"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/thTH Premwadee old.wav"></SOURCE></AUDIO></TD> <TD width="105"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/thTH Premwadee new.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="54"> <P>da-DK</P> </TD> <TD width="60"> <P>Christel</P> </TD> <TD width="102"> <P>Overall quality</P> </TD> <TD width="180"> <P>Sagde du noget til mig?</P> </TD> <TD width="102"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/daDK Christel old.wav"></SOURCE></AUDIO></TD> <TD width="105"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/daDK Christel new.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="54"> <P>vi-VN</P> </TD> <TD width="60"> <P>HoaiMy</P> </TD> <TD width="102"> <P>Pronunciation with the Southern accent</P> </TD> <TD width="180"> <P>Năm 1990, Liên Xô tan rã.</P> </TD> <TD width="102"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/viVN HoaiMy old.wav"></SOURCE></AUDIO></TD> <TD width="105"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/viVN HoaiMy new.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="54"> <P>vi-VN</P> </TD> <TD width="60"> <P>NamMinh</P> </TD> <TD width="102"> <P>Pronunciation with the Southern accent</P> </TD> <TD width="180"> <P>Năm 1990, Liên Xô tan rã.</P> </TD> <TD width="102"> <P>&nbsp;</P> <P><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/viVN NamMinh old.wav"></SOURCE></AUDIO></P> <P>&nbsp;</P> </TD> <TD width="105"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE 
src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/viVN NamMinh new.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>Get started</H2> <P>&nbsp;</P> <P>With these updates, we’re excited to be powering natural and intuitive voice experiences for more customers. Text to Speech offers&nbsp;<A href="#" target="_blank" rel="noopener">over 170 neural voices across over 70 languages</A>&nbsp;. In addition, the&nbsp;<A href="#" target="_blank" rel="noopener">Custom Neural Voice</A>&nbsp;capability enables organizations to create a unique brand voice in multiple languages and styles.</P> <P>&nbsp;</P> <P><STRONG>For more information:</STRONG></P> <UL> <LI>Try the <A href="#" target="_blank" rel="noopener">demo</A></LI> <LI>See our <A href="#" target="_blank" rel="noopener">documentation</A></LI> <LI><SPAN>Check out our </SPAN><A href="#" target="_blank" rel="noopener">sample code</A></LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> Fri, 24 Sep 2021 05:32:08 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/latest-updates-on-azure-neural-tts-new-voices-for-casual/ba-p/2761278 Qinying Liao 2021-09-24T05:32:08Z Announcing PyMarlin, a new PyTorch extension library for agile deep learning experimentation https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-pymarlin-a-new-pytorch-extension-library-for-agile/ba-p/2758051 <P><SPAN data-contrast="none">Bill Baer, Senior Product Manager, MSAI</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">We are excited to announce the release of&nbsp;PyMarlin! Today, AI is often bound by limitations of infrastructure, effectiveness of machine learning models, and ease of development -&nbsp;PyMarlin&nbsp;is&nbsp;a&nbsp;step closer to&nbsp;breaking through these barriers.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <H2 aria-level="2"><SPAN data-contrast="none">About&nbsp;PyMarlin</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">PyMarlin&nbsp;is a lightweight&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">PyTorch</SPAN></A><SPAN data-contrast="none">&nbsp;extension library for agile experimentation.&nbsp;PyMarlin&nbsp;was designed with the goal of simplifying the end-to-end deep learning experimentation lifecycle, agnostic of the compute environment.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">PyMarlin&nbsp;enables a way to quickly prototype a new AI scenario in development environments and effortlessly scale it to multiple processes or a multi-node <STRONG>Azure ML</STRONG> cluster with no code change needed to allow for rapid acceleration of AI innovation.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">With&nbsp;PyMarlin&nbsp;we are&nbsp;simplifying how developers and data scientists can easily use deep learning capabilities at scale for their work.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:257}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN 
data-contrast="auto">Some of the key features you will find in&nbsp;PyMarlin&nbsp;include:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:257}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><STRONG><SPAN data-contrast="none">Data pre-processing</SPAN></STRONG><SPAN data-contrast="none"> module which enables data preprocessing&nbsp;</SPAN><SPAN data-contrast="none">recipes</SPAN><SPAN data-contrast="none">&nbsp;to scale from single CPU to multi-CPU and multi node.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> </UL> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><STRONG><SPAN data-contrast="none">Infra-agnostic design:</SPAN></STRONG><SPAN data-contrast="none">&nbsp;native Azure ML integration implies the same code running on local dev-box can also run directly on any VM or Azure ML cluster.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><STRONG><SPAN data-contrast="none">Trainer backend abstraction</SPAN></STRONG><SPAN data-contrast="none"> with support for Single Process (CPU/GPU), distributed Data Parallel, mixed-precision (AMP, Apex) training.&nbsp;Microsoft offers&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">ORT</SPAN></A><SPAN data-contrast="none">&nbsp;and&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Deepspeed</SPAN></A><SPAN data-contrast="none">&nbsp;libraries to get the best distributed training throughputs.&nbsp;Checkout this Summarization scenario demoing this:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">PyMarlin/ORT</SPAN></A><SPAN data-contrast="auto">. 
We will soon be offering ORT+DS as native trainer backend for you to use directly in your scenarios.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="none">Out-of-the-box </SPAN><STRONG><SPAN data-contrast="none">Plugins</SPAN></STRONG><SPAN data-contrast="none"> that can be used for typical NLP tasks like Sequence Classification, Named Entity Recognition and Seq2Seq text generation.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><STRONG><SPAN data-contrast="none">Utility modules</SPAN></STRONG><SPAN data-contrast="none"> for model checkpointing, stats collection and&nbsp;Tensorboard&nbsp;events logging which can be customized based on your scenario.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"><STRONG><SPAN data-contrast="none">Custom arguments parser</SPAN></STRONG><SPAN data-contrast="none"> that allows for saving all the default values for arguments related to a scenario in a YAML config file, merging user supplied arguments at runtime.</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></LI> </UL> <P>&nbsp;</P> <P><SPAN data-contrast="none">In addition, all core modules are thoroughly </SPAN><STRONG><SPAN data-contrast="none">unit tested</SPAN></STRONG><SPAN data-contrast="none">. You can learn more about&nbsp;PyMarlin’s&nbsp;library core architecture here:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Hello from&nbsp;PyMarlin</SPAN></A><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <H2 aria-level="2"><SPAN data-contrast="none">Why&nbsp;PyMarlin?</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H2> <P><SPAN data-contrast="none">PyMarlin&nbsp;was developed within Microsoft Search, Assistant, and Intelligence (MSAI)&nbsp;in collaboration with Azure Machine Learning at Microsoft.&nbsp;MSAI brings together multiple areas of research to innovate in the products that millions of people use every day. We power features in Outlook, Teams, Word, SharePoint, Bing, Windows, and others. 
We work in areas including machine learning, information retrieval, data mining, natural language processing and human computer interaction,&nbsp;bringing&nbsp;together research and engineering to deliver impactful user experiences.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">PyMarlin’s&nbsp;primary goal&nbsp;was to make the library code easily readable and customizable. Where&nbsp;PyMarlin&nbsp;shines is that data scientists with knowledge of only&nbsp;PyTorch&nbsp;should be able to understand the entire library code within an hour. We are incredibly excited about this release as everyone has a role to play in AI transformation, and we believe&nbsp;PyMarlin&nbsp;is just one more step in achieving that goal.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">To learn more about&nbsp;PyMarlin&nbsp;and how to use it visit&nbsp;the GitHub link here:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">microsoft/PyMarlin</SPAN></A><SPAN data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:240}">.</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">Keep up with the latest developments in AI at Scale at&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">The AI Blog.</SPAN></A><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> Fri, 17 Sep 2021 15:20:08 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-pymarlin-a-new-pytorch-extension-library-for-agile/ba-p/2758051 Bill Baer 2021-09-17T15:20:08Z Discover the possibilities with Azure Percept https://gorovian.000webhostapp.com/?exam=t5/azure-ai/discover-the-possibilities-with-azure-percept/ba-p/2733947 <P>Experimenting with technology and applying it to new scenarios expands our skills, feeds our ingenuity, and stretches the limits of what’s possible—something we’re always doing here at Microsoft. Today, the intelligent edge is transforming business across every industry. As we evolve from connected assets to connected environments and ultimately, to entire connected ecosystems, Microsoft continues to invest heavily to accelerate innovation of AI at the edge.</P> <P>&nbsp;</P> <P>I’m particularly enthralled with <A href="#" target="_blank" rel="noopener">Azure Percept</A>, an end-to-end, low-code platform that streamlines the creation of highly secure artificial intelligence (AI) solutions at the edge. It works seamlessly with Azure Cognitive Services, Azure Machine Learning, and other services. The <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-the-one-about-ai-model-possibilities/ba-p/2579617" target="_blank" rel="noopener">Azure Percept development kit includes pre-built AI models</A> that can detect people, vehicles, general objects, and even products on a shelf. It also includes workflows for edge AI models and solutions. In fact, setting up a prototype takes mere minutes.</P> <P>&nbsp;</P> <P>I recommend exploring Azure Percept to truly appreciate all the enticing possibilities. 
To inspire your creativity, I’ll explore some ingenious solutions that are driving value across manufacturing, retail, smart cities, smart buildings, and transportation. I’ll also share some step-by-step projects and other resources to help you get started with Azure Percept.</P> <P>&nbsp;</P> <H2>Building smarter cities</H2> <P>Cities are increasingly leveraging edge AI and IoT to improve safety and efficiency. Edge computing scenarios enabled by Azure Percept include:</P> <UL> <LI>Conserving water and energy resources through utility management</LI> <LI>Optimizing transportation planning</LI> <LI>Improving citizen security by monitoring for suspicious activity</LI> </UL> <P>To keep people safer during the pandemic and improve city operations, Kortrijik in Belgium needed <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-the-next-smart-city-with-azure-percept/ba-p/2547992" target="_blank" rel="noopener">insights into pedestrian movement and density</A>. The city turned to Proximus which used Azure Percept and machine learning to develop a proof of concept. Placing sensors in key locations, they were able to successfully deliver real-time vision and audio insights to promote safety and smoother operations.</P> <P>&nbsp;</P> <P>The ability to monitor street activity and traffic can inform decisions around maintenance, road conditions, parking availability, and transportation planning. Using the Azure Percept dev kit, this enterprising developer was able to <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/building-a-traffic-monitoring-ai-application-for-a-smart-city/ba-p/2596644" target="_blank" rel="noopener">build a sample traffic monitoring application</A> that could classify vehicles into categories, which could then be used to generate insights such as traffic density and vehicle type distribution over time. <A href="#" target="_blank" rel="noopener">Connecting to LTE or 5G networks</A> can also help Azure Percept enable smart city scenarios.</P> <P>&nbsp;</P> <H2>Making spaces safer and more efficient</H2> <P>The prospects for making homes, office spaces, and other facilities smarter and safer are seemingly limitless. Solutions built in Azure Percept can enable scenarios such as:</P> <UL> <LI>Detecting "tailgating” by unauthorized individuals</LI> <LI>Monitoring for access control, intrusion detection, and suspicious activities</LI> <LI>Ensuring worker safety through dangerous zone monitoring and accident detection</LI> </UL> <P>Take, for example, real estate management companies, which like most businesses are always seeking opportunities to lower costs. By default, most office spaces turn up heat (or air conditioning) in the morning and turn it down in the evening, regardless of usage. This <A href="#" target="_blank" rel="noopener">smart building business case for using Azure Percept</A> shows how you can improve building climate control by tracking space usage and weather forecasts to optimize building temperatures—and potentially lower costs by 20%.&nbsp;</P> <P>&nbsp;</P> <P>For those looking to get hands-on with implementing these types of smart spaces use cases, Microsoft AI MVP, Goran Vuksic, illustrates how the standard Azure Percept people detection model can be used to <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/detecting-room-occupancy-with-azure-percept/ba-p/2588263" target="_blank" rel="noopener">build a room occupancy detection solution</A>. 
Other possible smart space application tutorials include how to <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-audio-home-automation-with-azure-functions-azure/ba-p/2528048" target="_blank" rel="noopener">control a desk lamp with voice commands</A>.</P> <H2><BR />Rethinking retail</H2> <P>The possibilities for retailers are especially appealing. Solutions created in Azure Percept can be used to reduce queue waits, aid in social distancing, and keep products in stock and on shelves. Additional use cases include:</P> <UL> <LI>Understanding consumer traffic flows and shopping patterns</LI> <LI>Improving store safety and space utilization</LI> <LI>Expediting in-store pickup of online purchases through vehicle detection</LI> </UL> <P>For retailers that stock perishables, solutions built in Azure Percept can help reduce spoilage, a problem that leads to $161.6 billion in cumulative annual losses in the US.<A href="https://gorovian.000webhostapp.com/?exam=#_edn1" target="_blank" rel="noopener" name="_ednref1"><SPAN>[i]</SPAN></A> For example, video AI can <A href="#" target="_blank" rel="noopener">monitor bins of produce</A> to send alerts when restocking is required. By applying insights over time, these analytics help store personnel understand how much to order. Purchasing analytics can also be used to increase supply chain efficiency, lowering emissions from the transport of fresh food, reducing use of fertilizer, and creating a healthier planet.</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://youtu.be/1H-U8psvxYE" align="center" size="large" width="600" height="338" uploading="false" thumbnail="http://i.ytimg.com/vi/1H-U8psvxYE/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>For those that are getting retail items delivered to their door, it’s also possible to <A href="#" target="_blank" rel="noopener">set up your own end-to-end package delivery monitoring edge AI application</A> using Azure Percept. Discover the exact steps needed to set up an AI application that detects when a person or truck is seen by a camera, all of which is then sent to a video analytics dashboard for real-time updates so you’re never left wondering if your package has been delivered yet.</P> <P>&nbsp;</P> <H2>Making manufacturing safer and more productive</H2> <P>For manufacturers, AI at the edge solves multiple challenges to create safer, more productive, and more profitable operations. Through intelligent automation, manufacturers can capture data insights throughout production and remotely monitor plants and warehouses. Azure Percept can be used to address a range of use cases to help manufacturers:</P> <UL> <LI>Improve quality through defect detection</LI> <LI>Automate assembly processes and supply chain visibility</LI> <LI>Increase worker safety and loss prevention</LI> </UL> <P>As laid out in this <A href="#" target="_blank" rel="noopener">industrial IoT business use case</A>, AI edge solutions powered by Azure Percept enable preventative maintenance by detecting and diagnosing issues before they disrupt production. This saves both time and money. 
Furthermore, <A href="#" target="_blank" rel="noopener">Azure Percept can even be used to control robotic equipment</A> with voice commands to remotely exchange parts, adjust settings, and perform other service tasks.</P> <P><BR /><LI-VIDEO vid="https://youtu.be/zSBNsEqU5NA" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/zSBNsEqU5NA/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P>Within manufacturing settings, there are often multiple people, vehicles, and other objects on a factory floor. Which is why being able to use the Azure Percept dev kit to create an obstacle avoidance model can be so valuable. <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/perceptmobile-azure-percept-obstacle-avoidance-lego-car/ba-p/2352666" target="_blank" rel="noopener">Explore the steps taken to build the “Perceptmobile”</A>, including how to train the custom vision model via Azure Percept Studio and test it with the camera stream.</P> <P>&nbsp;</P> <H2>Envision the possibilities for your industry or use case</H2> <P>These are just a few of the industries where Azure Percept can drive incredible value. Scenarios in healthcare include supply chain efficiency, patient recognition, and waiting room prioritization. Edge intelligence is being used in transportation to <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/using-azure-percept-to-build-an-aircraft-part-checker/ba-p/2518329" target="_blank" rel="noopener">check and identify aircraft parts</A>. It can even be used for improving people’s lives, like <A href="#" target="_blank" rel="noopener">interpreting sign language</A>.</P> <P>&nbsp;</P> <P>Azure Percept makes creating value at the edge easier than ever. And the good news is that the <A href="https://gorovian.000webhostapp.com/?exam=t5/internet-of-things/azure-percept-dk-and-azure-percept-audio-now-available-in-more/ba-p/2712969" target="_blank" rel="noopener">Azure Percept DK and Azure Percept Audio are now available in 16 markets</A><SPAN>.</SPAN> To envision all the possibilities, I suggest that you dive in and <A href="#" target="_blank" rel="noopener">start exploring Azure Percept</A> today.</P> <P>&nbsp;</P> <P><EM><A href="https://gorovian.000webhostapp.com/?exam=#_ednref1" target="_blank" rel="noopener" name="_edn1">[i]</A> USDA, ‘Loss-Adjusted Food Availability Documentation,’ available at: <A href="#" target="_blank" rel="noopener">https://www.ers.usda.gov/data-products/food-availability-per-capita-data-system/loss-adjusted-food-availability-documentation/</A> (accessed Sept 7, 2021)</EM></P> Tue, 14 Sep 2021 02:43:41 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/discover-the-possibilities-with-azure-percept/ba-p/2733947 Amiyouss 2021-09-14T02:43:41Z 5 Ways Azure Cognitive Services Scale https://gorovian.000webhostapp.com/?exam=t5/azure-ai/5-ways-azure-cognitive-services-scale/ba-p/2664021 <P><A href="#" target="_blank" rel="noopener">Azure Cognitive Services</A>: the AI service that keeps on giving. Although a popular choice for quick development and proof-of-concepts, don’t let that override all the reasons Azure Cognitive Services are built for innovative production workloads. In this blog I want to outline 5 ways Azure Cognitive services can scale to support your mission critical solutions.</P> <P>&nbsp;</P> <P><EM>The purpose of this blog post is to share how you can access the same AI services that power enterprise products such as Teams and Xbox. 
For more information, or to get started using Azure AI, check out </EM><A href="#" target="_blank" rel="noopener">Artificial intelligence for developers | Microsoft Azure</A></P> <P>&nbsp;</P> <H2><SPAN>1. Scaling is dealt with for you</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Azure Cognitive Services scales for you. But don’t stop reading now … there is so much more these services can do to help you scale and let me explain how they scale for you.</SPAN></P> <P>&nbsp;</P> <P><SPAN>The word scalability is often used with cloud technologies and is one of the key advantages of building solutions in the cloud. Scalability is the ability to grow your business or service in a pay-as-you-go format, meaning there are no high, up-front investments for machines, and there’s more flexibility on what you can build by choosing the types of technology you need for a specific solution.</SPAN></P> <P>&nbsp;</P> <P><SPAN>The cloud also often helps you scale through scaling features – scaling vertically, such as the size of a virtual machine for example. And scaling horizontally, such as the number of concurrent machines needed for a workload.</SPAN></P> <P><SPAN>&nbsp;</SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AmyBoyd_3-1629291465852.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304038iE2B7AF10FECD9B9D/image-size/medium?v=v2&amp;px=400" role="button" title="AmyBoyd_3-1629291465852.png" alt="AmyBoyd_3-1629291465852.png" /></span></P> <P><SPAN>So how does Azure Cognitive Services scale for you? Horizontally? Vertically? <BR />It is done in a way that almost reminds me of serverless compute: if I have an Azure function and run it 10 times or 1000 times, you still write the same code. </SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AmyBoyd_4-1629291465861.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304039iDB079BA9EB29C4C0/image-size/medium?v=v2&amp;px=400" role="button" title="AmyBoyd_4-1629291465861.png" alt="AmyBoyd_4-1629291465861.png" /></span></P> <P><SPAN>This is the same for Azure Cognitive Services. 
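<P>&nbsp;</P> <P><EM>As a concrete illustration of that point (a minimal sketch, not the Power Automate demo described later in this post): calling the Computer Vision Describe operation over a list of image URLs looks exactly the same whether the list holds ten items or a thousand. The endpoint, key, and image URLs below are placeholders.</EM></P> <LI-CODE lang="python">import requests

# Placeholder values - substitute your own Computer Vision resource details.
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR-KEY"

image_urls = [
    "https://example.com/images/photo1.jpg",
    "https://example.com/images/photo2.jpg",
    # ...the same loop works unchanged for 10 or 1,000 URLs
]

def describe(url):
    """Ask the Computer Vision Describe operation for a one-line caption."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/describe",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        params={"maxCandidates": 1},
        json={"url": url},
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    return captions[0]["text"] if captions else "(no caption)"

for url in image_urls:
    print(url, "->", describe(url))</LI-CODE> <P>&nbsp;</P>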
For example, if I call the Computer Vision API to describe 10 images, I can also ask it to describe 1000 images without having to change any code or options – it’s already dealt with by the service.</SPAN></P> <P><SPAN>&nbsp;</SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AmyBoyd_5-1629291465864.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304040i3A87B8023ACD0CEE/image-size/medium?v=v2&amp;px=400" role="button" title="AmyBoyd_5-1629291465864.png" alt="AmyBoyd_5-1629291465864.png" /></span></P> <P><SPAN>Find out more about scaling and serverless computing in the Azure Fundamentals learning path on Microsoft Learn:</SPAN></P> <UL> <LI><A href="#" target="_blank" rel="noopener">Azure Fundamentals part 1: Describe core Azure concepts</A></LI> <LI><A href="#" target="_blank" rel="noopener">Azure Fundamentals part 2: Describe core Azure Services</A></LI> </UL> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>Have a look at this demo I created to illustrate this point:</SPAN></P> <UL> <LI><SPAN>Whether submitting one image or 200 images, you do not need to make edits to infrastructure or code</SPAN></LI> <LI><SPAN>In this demo, I created a Power Automate Flow that triggers when images land in Azure Blob Storage and analyses them</SPAN></LI> <LI><SPAN>Once analysed, the results are sent back to Azure Blob Storage for use.</SPAN></LI> </UL> <P><SPAN><LI-VIDEO vid="https://youtu.be/BKEP4eroO4A" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/BKEP4eroO4A/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <P>&nbsp;</P> <H2><SPAN>2. Scaling to support growth</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Here is a use case of how automatic Azure Cognitive Services scaling can support you. If you suddenly see an influx of usage of a feature in your application that uses Cognitive Services, that is great news! You still need to scale the front end and back end of your application, but Azure Cognitive Services takes care of scaling the API, so you do not need to worry about this part of your architecture. You only pay for what you use, and you can choose a grouped API key for the vision and language services where available.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Understand how usage of your production application relates to costs: </SPAN><A href="#" target="_blank" rel="noopener">Azure Cognitive Services pricing</A></P> <P>&nbsp;</P> <H2><SPAN>3. Scaling your pace of innovation</SPAN></H2> <P>&nbsp;</P> <P><SPAN>On to a different type of scaling: supporting your pace of innovation.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Azure Cognitive Services models are trained by Microsoft on incredibly large datasets, and you can leverage that training and work rather than building it yourself, leaving you able to focus on the full solution rather than only the machine learning model. A good example to illustrate the point is speech and text translation. You need rich datasets for many different languages to create quality language and translation models from scratch. Instead, using Azure Cognitive Services, you can simply leverage these models through API calls and scale out your applications around the world faster.</SPAN></P> <P>&nbsp;</P> <P><SPAN>One story I think really illustrates this point is the work done with </SPAN><A href="#" target="_blank" rel="noopener">Vodafone</A><SPAN>. Vodafone are a global telecommunications company and are actively expanding their digital strategy. 
Vodafone used Microsoft Azure services to develop a personable digital assistant named TOBi to help support good customer service interactions.</SPAN></P> <P>&nbsp;</P> <P><SPAN>They needed a flexible, scalable technology solution that could match their growth ambitions. It was essential that the tools would be easy to use for the conversational designers in their business and built upon modular technologies to support their ambition to make TOBi the biggest chatbot in the world.</SPAN></P> <P>&nbsp;</P> <P><SPAN>They built the bot using the Microsoft Bot Framework and Language Understanding capabilities from Cognitive Services, and were also able to easily connect to other non-Microsoft technologies within their business.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Vodafone first deployed TOBi in Italy and has since rolled out language-specific versions of its bot to 15 other markets. To offer the bot in a country’s local language, Vodafone adopted Translator, part of Azure Cognitive Services, for real-time, AI-based text translation. The company currently makes TOBi available in 15 languages with even more planned.&nbsp;</SPAN><SPAN>Scaling from 1 to 15 supported languages was accelerated by using the Translator APIs. </SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AmyBoyd_6-1629293472568.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304045iB5CBE183764243F7/image-size/medium?v=v2&amp;px=400" role="button" title="AmyBoyd_6-1629293472568.png" alt="AmyBoyd_6-1629293472568.png" /></span></P> <P><SPAN>The general scale of these APIs also speaks for itself: in December 2020, Vodafone reported that TOBi now holds 25 to 30 million conversations a month with customers and handles 60 percent of their customer interactions.</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blogimage.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304351i0F9A1A4482C2DC18/image-size/medium?v=v2&amp;px=400" role="button" title="blogimage.png" alt="blogimage.png" /></span></P> <P><SPAN>Have a look at this demo I created to illustrate this point:</SPAN></P> <UL> <LI><SPAN>Azure Cognitive Services can help you scale your innovation and solutions</SPAN></LI> <LI><SPAN>Using the Translator APIs, I can convert simple text and full documents</SPAN></LI> <LI><SPAN>I can also translate into multiple languages at once using 2 HTTP requests</SPAN></LI> </UL> <P><SPAN><LI-VIDEO vid="https://youtu.be/5F0fTyuYp18" align="center" size="small" width="200" height="113" uploading="false" thumbnail="https://i.ytimg.com/vi/5F0fTyuYp18/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <P>&nbsp;</P> <H2><SPAN>4. Scaling you may not know you’re using</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Speaking of scaling in production use cases, there are likely a few technologies you may be using across the Microsoft ecosystem that have Azure Cognitive Services built in. For example, <A href="#" target="_blank" rel="noopener">Microsoft Teams, Microsoft PowerPoint</A>, or even other Azure services such as <A href="#" target="_blank" rel="noopener">Metrics Advisor</A>. </SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>Part of the <A href="#" target="_blank" rel="noopener">Azure Applied AI services</A>, <A href="#" target="_blank" rel="noopener">Metrics Advisor</A> uses AI to perform data monitoring and anomaly detection in time series data. 
Collect your time series data, apply metrics advisor to detect anomalies. And then the most important part, act on the detections and analyze root causes to add value to your IoT project, business process or application</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AmyBoyd_9-1629293734602.png" style="width: 758px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/304048i55BCED372E589748/image-dimensions/758x325?v=v2" width="758" height="325" role="button" title="AmyBoyd_9-1629293734602.png" alt="AmyBoyd_9-1629293734602.png" /></span></P> <P>&nbsp;</P> <P>These APIs are, by design, used for enterprise production workloads and built into our first party services for you to use.</P> <P>&nbsp;</P> <H2><SPAN>5. Scaling through design considerations</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Last but certainly not least, there are a couple of design points to consider when it comes to different uses of the Azure Cognitive Services, for example scaling outside of the cloud or scaling to use Big Data.</SPAN></P> <P>&nbsp;</P> <OL> <LI> <P><STRONG>Running Cognitive Services outside the cloud.</STRONG> For some projects calling cloud APIs may not be ideal or even possible. <A style="background-color: #ffffff;" href="#" target="_blank" rel="noopener">There are 16 cognitive services within containers outside the cloud</A>.&nbsp;<SPAN style="font-family: inherit;">One of the reasons you may not want to call to the cloud is if you have a high throughput / low latency requirement then you would want to run Cognitive Services physically closer to your application logic and data. You can do this by using containers. Containers do not cap transactions per second (TPS) and can be made to scale both up and out to handle demand if you provide the necessary hardware resources</SPAN></P> </LI> <LI> <P><STRONG style="font-family: inherit;">Scaling Cognitive Services applications in Big Data</STRONG><SPAN style="font-family: inherit;">. The </SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Azure Cognitive Services for Big Data</A><SPAN style="font-family: inherit;"> lets users channel terabytes of data through Cognitive Services using&nbsp;Apache Spark. With only a few lines of code you can integrate Cognitive Services using the PySpark API in the&nbsp;</SPAN><A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Microsoft Machine Learning Spark&nbsp;</A><SPAN style="font-family: inherit;">namespace (mmlspark.cognitive). 
Scala and Java are also supported.</SPAN></P> </LI> </OL> <P>&nbsp;</P> <P><SPAN>Have a look at this demo I created to illustrate this point:</SPAN></P> <UL> <LI>If you are using big data frameworks such as Spark, Cognitive Services has a library for that: mmlspark</LI> <LI>Using Azure Databricks notebooks, I was able to analyse text and images using the mmlspark library</LI> </UL> <P><LI-VIDEO vid="https://youtu.be/7YE8COdqSWY" align="center" size="small" width="200" height="113" uploading="false" thumbnail="http://i.ytimg.com/vi/7YE8COdqSWY/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <H2>What's Next?</H2> <P>&nbsp;</P> <P>In this blog post we covered<STRONG> 5 ways Azure Cognitive Services can help you scale your production workloads,</STRONG> from automatically scaling for you, to supporting your pace of innovation, as well as solutions for edge and big data use cases.</P> <P>&nbsp;</P> <UL> <LI>For more learning, information, and insights on Azure Cognitive Services, go to the Artificial Intelligence for developers web page: <A href="#" target="_blank" rel="noopener">Artificial intelligence for developers | Microsoft Azure</A></LI> <LI>To find out more about Azure Cognitive Services and how they can be used in your mission-critical workloads, take some time to review the documentation and samples: <A style="font-family: inherit; background-color: #ffffff;" href="#" target="_blank" rel="noopener">Azure Cognitive Services documentation | Microsoft Docs</A></LI> </UL> <P>&nbsp;</P> Mon, 23 Aug 2021 09:37:37 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/5-ways-azure-cognitive-services-scale/ba-p/2664021 AmyBoyd 2021-08-23T09:37:37Z Semantic Search in Action https://gorovian.000webhostapp.com/?exam=t5/azure-ai/semantic-search-in-action/ba-p/2672304 <P class="lia-align-justify">&nbsp;</P> <P>Azure Cognitive Search now includes a new feature called <STRONG>semantic search.</STRONG> Customers have already put this feature into action: in early May 2021, Ogilvy, a subsidiary of WPP, incorporated semantic search into their Enterprise Knowledge Management system, Starfish. The project is based around a content discovery portal, intended to be the first point of contact for users and a key component in Ogilvy’s rich ecosystem. It uses Cognitive Search to provide intelligent document insights and recommendations on RFIs, RFPs, and case studies, leading to faster and more efficient responses to new business requests.</P> <P>A client typically asks a series of questions, starting with inquiries about Ogilvy as a company, its capabilities and accomplishments, similar work for peer companies, and fees. 
On the Starfish portal, they would ask the following:</P> <UL> <LI>When Ogilvy receives an RFI, it will include some basic questions about Ogilvy <UL> <LI>Where are Ogilvy's headquarters?</LI> <LI>What are Ogilvy's core competencies?</LI> <LI>Who are Ogilvy's biggest customers?</LI> </UL> </LI> <LI>RFIs will also include deeper questions about Ogilvy’s experience and how they think and work <UL> <LI>Give an example of Ogilvy’s work &lt;- good answer</LI> </UL> </LI> <LI>Ogilvy may also want to reference past customer scenarios to show how they solved problems in the past <UL> <LI>What was Ogilvy's campaign for Fanta?</LI> <LI>When was Fanta discovered?</LI> </UL> </LI> </UL> <P>&nbsp;</P> <P>Without semantic search, query terms are analyzed via similarity algorithms, using a term frequency that counts the number of times a term appears in a document or within a document corpus. A probability is then applied to estimate whether the result is relevant. Intent is lacking in most web search experiences.</P> <P>Overall, semantic search has significantly advanced the quality of search results:</P> <P><STRONG>Technology benefits:</STRONG></P> <OL> <LI>Intelligent ranking – uses a semantic ranking model, so search is based on context and intent, elevating matches that make more sense given the relevance of the content in the results.</LI> <LI>Better query understanding – it is based on meaning and not just the syntax of a word, unlike other technologies that rely on term frequency. Compare "WHO sent a message" (World Health Organization) with "Who is the father…?"</LI> <LI>Semantic answers – semantic search improves the quality of search results in two ways. First, ranking documents that are semantically closer to the intent of the original query is a significant benefit. Second, results are more immediately consumable when captions, and potentially answers, are present on the page. At all times, the engine is working with existing content. 
Language models used in semantic search are designed to extract an intact string that looks like an answer, but they won't try to compose a new string as an answer to a query or as a caption for a matching document.</LI> </OL> <P>We use deep neural nets in Bing that understand the nuance of language and are trained on different models of the language – how words are related in various contexts and dimensions.</P> <P>&nbsp;</P> <P>Figure 1. JSON query:</P> <LI-CODE lang="json">{
    "search": "When was Fanta Orange discovered",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "speller": "lexicon",
    "answers": "extractive|count-3",
    "searchFields": "content,metadata_storage_name",
    "count": true
}</LI-CODE> <P>&nbsp;</P> <P>Response (note the caption in the answer):</P> <LI-CODE lang="json">{
    "@odata.context": "https://ci-acs.search.windows.net/indexes('ogilvy-poc-index')/$metadata#docs(*)",
    "@odata.count": 2115,
    "@search.answers": [
        {
            "key": "79b0fe8e-0648-4cc5-bd5c-eaf0e2027855",
            "text": "First launched Fanta began Fanta U.S. in U.S. phasing out in U.S. relaunch 1940 1941 1959 1987 2002 2005 First launched Minute Maid in Germany launched in U.S. As beverage choice has exploded in recent years, carbonated soft drinks (CSDs) have faced stiff competition.",
            "highlights": null,
            "score": 0.8339705
        }</LI-CODE> <P>&nbsp;</P> <P>Versus the same query without semantic search:</P> <LI-CODE lang="json">{
    "search": "When was Fanta discovered",
    "queryType": "full",
    "queryLanguage": "en-us",
    "speller": "lexicon",
    "count": true
}</LI-CODE> <LI-CODE lang="json">{
    "@odata.context": "https://ci-acs.search.windows.net/indexes('ogilvy-poc-index')/$metadata#docs(*)",
    "@odata.count": 3253,
    "@search.nextPageParameters": {
        "search": "When was Fanta discovered",
        "queryType": "full",
        "queryLanguage": "en-us",
        "speller": "lexicon",
        "count": true,
        "skip": 50
    },</LI-CODE> <P>&nbsp;</P> <P>The response has several hits, but none are close:</P> <LI-CODE lang="json">    "value": [
        {
            "@search.score": 42.056797,
            "content": "\n_rels/.rels\n\n\ndocProps/core.xml\n\n\ndocProps/app.xml\n\n\nppt/presentation.xml\n\n\nppt/_rels/presentation.xml.rels\n\n\nppt/presProps.xml\n\n\nppt/viewProps.xml\n\n\nppt/commentAuthors.xml\n\n\nppt/slideMasters/slideMaster1.xml\nTitle TextBody Level OneBody Level TwoBody Level ThreeBody Level FourBody Level Five\n\n\nppt/slideMasters/_rels/slideMaster1.xml.rels\n\n\nppt/theme/theme1.xml\n\n\nppt/slideLayouts/slideLayout1.xml\nTitle TextBody Level OneBody Level TwoBody Level ThreeBody Level FourBody Level Five\n\n\nppt/slideLayouts/_rels/slideLayout1.xml.rels\n\n\nppt/slideLayouts/slideLayout2.xml\nTitle TextBody Level OneBody Level TwoBody Level</LI-CODE> <P>&nbsp;</P> <P><STRONG>Technology Background:</STRONG></P> <P>Semantic search adds two capabilities: first, a semantic ranking model; and second, captions and answers returned in the response.</P> <P><EM>Semantic ranking</EM>&nbsp;looks for context and relatedness among terms, elevating matches that make more sense given the query. <EM>Language understanding</EM> finds summarizations or&nbsp;<EM>captions</EM>&nbsp;and&nbsp;<EM>answers</EM>&nbsp;within your content and includes them in the response, which can then be rendered on a search results page for a more productive search experience.</P> <P>State-of-the-art pretrained models are used for summarization and ranking. To maintain the fast performance that users expect from search, semantic summarization and ranking are applied to just the top 50 results, as scored by the&nbsp;<A href="#" target="_blank" rel="noopener">default similarity scoring algorithm</A> (BM25). Using those results as the document corpus, semantic ranking re-scores them based on the semantic strength of the match. Scores are calculated based on the degree of linguistic similarity between query terms and matching terms in the index.</P> <P>The underlying technology is from Bing and Microsoft Research, and is integrated into the Cognitive Search infrastructure as an add-on feature.</P> <P>In the preparation step, the document corpus returned from the initial result set is analyzed at the sentence and paragraph level to find passages that summarize each document. In contrast with keyword search, this step uses machine reading and comprehension to evaluate the content. Through this stage of content processing, a semantic query returns&nbsp;<A href="#" target="_blank" rel="noopener">captions</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener">answers</A>. To formulate them, semantic search uses language representation to extract and highlight key passages that best summarize a result. 
If the search query is a question - and answers are requested - the response will also include a text passage that best answers the question, as expressed by the search query.</P> <P>For both captions and answers, existing text is used in the formulation. The semantic models do not compose new sentences or phrases from the available content, nor does it apply logic to arrive at new conclusions. In short, the system will never return content that doesn't already exist.</P> <P>Results are then re-scored based on the&nbsp;<A href="#" target="_blank" rel="noopener">conceptual similarity</A>&nbsp;of query terms.</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Key Success Measurements for Ogilvy</STRONG></P> <UL> <LI>40% improvement in RFP/RFI response time.</LI> <LI>Content growth per month</LI> <LI>RFP Generator clicks</LI> <LI>Content downloads</LI> <LI>User Adoption and Collaboration</LI> <LI>Quality Content Searches</LI> </UL> <P>&nbsp;</P> <P><STRONG>Business Outcomes:</STRONG></P> <P>The biggest business impact will be to have a significant increase in win rate for RFI's which lead to a higher revenue, this was achieved by the portals ability to identify best answers to the RFI and layouts without having to perform multiple searches, saving time and resources. Being able to use routine methods, filters and cognitive function to refine the search results would eliminate redundancy by almost 40%, reducing the costs of the process, and enhance customer experience and satisfaction.</P> Sun, 22 Aug 2021 02:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/semantic-search-in-action/ba-p/2672304 Sonia Ang 2021-08-22T02:00:00Z Azure Health Bot expands its template catalog to amplify the patient voice through PRO collection https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-health-bot-expands-its-template-catalog-to-amplify-the/ba-p/2628098 <P>In recent years, there has been an increasing focus <SPAN>to place</SPAN> patients at the center of healthcare decisions, improve safety, enhance experience and maximize the value of the provided care. Self-reporting of daily functioning and health outcomes from the patient<SPAN>, </SPAN>rather than caregiver perspectives<SPAN>, </SPAN>are widely used to inform and guide patient- and quality-centered care, clinical decision-making, research, and health policies. These measures have been shown to facilitate patient-provider communication and enhance treatment adherence. Moreover, if developed using standardized procedures, they can be used as clinical trial endpoints. Such usage is broadly encouraged by regulatory agencies and is informative for “downstream” decision-makers in the pathway to market, like insurers and health care systems. However, traditional Patient Reported Outcomes (PRO) collection instruments, such as paper-pencil surveys and phone interviews, make the process lengthy, cumbersome, and labor-intensive.</P> <P>&nbsp;</P> <P>Azure Health Bot offers a natural approach to enable real-time electronic data collection directly from patients, that would be engaging, would amplify patient voice and accelerate outcomes. As we recently shared in <A href="#" target="_blank" rel="noopener">this blog</A>, during the last year of the pandemic, Azure Health Bot has been at the leading edge of helping organizations engage with patients at scale. 
During that time, the service delivered more than 2 billion messages to over 150 million people worldwide, spanning 25 countries.</P> <P>&nbsp;</P> <P>Today we announce the Patient Reported Outcomes category that is added to the Azure Health Bot Template Catalog, to facilitate self-reporting of patient daily functioning and health outcomes metrics. The templates are built to support interoperability through FHIR questionnaires. This will allow healthcare organizations, such as care providers and clinical trial sponsors, to instantly put together Health Bot scenarios for collecting responses directly from patients, incorporating outcomes into data flows and aggregating them towards further analysis in the context of clinical trials, informed quality-centered care and beyond.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ksenyakveler_1-1628526607089.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/301826iF0C30F9E54CAA526/image-size/large?v=v2&amp;px=999" role="button" title="ksenyakveler_1-1628526607089.png" alt="ksenyakveler_1-1628526607089.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Starting with two widely known, generic health-related quality of life questionnaires, <A href="#" target="_blank" rel="noopener">RAND 36-Item Health Survey (SF-36)</A> and <A href="#" target="_blank" rel="noopener">CDC HRQOL–14 "Healthy Days Measure"</A>, as well as already available <A href="#" target="_blank" rel="noopener">PHQ-SADS mental health screener</A>, we plan on expanding those later on to additional generic and condition-specific instruments.&nbsp;Both RAND SF-36 and HRQOL-14 templates, currently available in English, are fully localized and ready for translation.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog_sf36_combined.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/301825i7A9FA00B8AD85EA7/image-size/large?v=v2&amp;px=999" role="button" title="blog_sf36_combined.png" alt="blog_sf36_combined.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="blog_sf36_combined_2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/301870i313E6255034A6E89/image-size/large?v=v2&amp;px=999" role="button" title="blog_sf36_combined_2.png" alt="blog_sf36_combined_2.png" /></span></P> <P>&nbsp;</P> <P>Get started today:</P> <P>Explore&nbsp;<A href="#" target="_blank" rel="noopener">Azure Health Bot</A></P> <P><SPAN>Learn more about</SPAN><SPAN>&nbsp;</SPAN><SPAN><A href="#" target="_blank" rel="noopener">Microsoft Cloud for Healthcare</A></SPAN></P> <P>Review <A href="#" target="_blank" rel="noopener">Scenario Template Catalog</A></P> <P>&nbsp;</P> Mon, 09 Aug 2021 18:07:05 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-health-bot-expands-its-template-catalog-to-amplify-the/ba-p/2628098 ksenyakveler 2021-08-09T18:07:05Z Summarize text with Text Analytics API https://gorovian.000webhostapp.com/?exam=t5/azure-ai/summarize-text-with-text-analytics-api/ba-p/2601243 <H4><STRONG>Text Analytics extractive summarization is in public preview!</STRONG></H4> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ylxiong_0-1628032473948.png" style="width: 400px;"><img 
src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300307iF77E8A33AC732F0E/image-size/medium?v=v2&amp;px=400" role="button" title="ylxiong_0-1628032473948.png" alt="ylxiong_0-1628032473948.png" /></span></P> <P><SPAN>The extractive summarization feature in Text Analytics uses natural language processing techniques to locate key sentences in an unstructured text document. These sentences collectively convey the main idea of the document. This feature is provided as an API for developers. They can use it to build intelligent solutions based on the relevant information extracted to support various use cases.</SPAN></P> <P>&nbsp;</P> <P>In the public preview, extractive summarization supports 10 languages. &nbsp;It&nbsp;is based on pretrained multilingual transformer models, part of our quest for&nbsp;<A href="#" target="_blank" rel="noopener">holistic representations</A>.&nbsp; It draws its strength from transfer learning across monolingual and harness the shared nature of languages&nbsp;to produce models of improved quality and efficiency. The 10 languages are English, Spanish, French, Italian, German, Chinese (Simplified), Japanese, Korean,&nbsp;&nbsp;<SPAN style="font-family: inherit;">Portuguese (Portugal), and Portuguese (Brazilian).&nbsp;</SPAN></P> <P><SPAN><A title="extsum doc" href="#" target="_self">Learn more about Text Analytics extractive summarization</A></SPAN></P> <P>&nbsp;</P> <P><U>References</U>:</P> <P><A href="#" target="_blank" rel="noopener noreferrer">Quickstart&nbsp;</A>offers an easy way to get started with any of the Text Analytics offerings.</P> <P>Text Analytics v3.1 <A title="v3.1 GA" href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-ga-of-text-analytics-for-health-opinion-mining-pii/ba-p/2600747" target="_self">GA Announcement</A></P> <P>&nbsp;</P> <P>&nbsp;</P> Mon, 09 Aug 2021 17:27:33 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/summarize-text-with-text-analytics-api/ba-p/2601243 ylxiong 2021-08-09T17:27:33Z Announcing GA of Text Analytics for health, Opinion Mining, PII and Analyze https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-ga-of-text-analytics-for-health-opinion-mining-pii/ba-p/2600747 <H2>Text Analytics v3.1 is in GA!</H2> <P>&nbsp;</P> <P>It has been a year since we released (in GA) our last TA API (v3.0). After five previews of adding features, responsible AI, incorporating customer feedback, UX feedback, and optimizations; in July 2021 we announced GA (General Availability) of Text Analytics v3.1. With this release, <STRONG>starting July 2021 customers can use Text Analytics for health, Opinion Mining, PII</STRONG><STRONG>&nbsp;and Analyze</STRONG><STRONG>&nbsp;as GA offerings</STRONG>.</P> <P>&nbsp;</P> <H3>GA of Text Analytics for health</H3> <P>&nbsp;</P> <P>Text Analytics for health is a feature of the Text Analytics API service that extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records. Millions of Text Records were processed during the preview in the last year. (1 text record = 1000 characters). 
Here is how the output from Text Analytics for health can be used to identify entities, extract and link relations and detect assertions.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="health-relation-extraction.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300019iF0E30086B36233AF/image-size/large?v=v2&amp;px=999" role="button" title="health-relation-extraction.png" alt="health-relation-extraction.png" /></span></P> <P>There are two ways to utilize this service:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">The web-based API (asynchronous)</A></LI> <LI><A href="#" target="_blank" rel="noopener">A Docker container (synchronous)</A></LI> </UL> <P>A blog about the announcement is published <SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/text-analytics-for-health-now-generally-available-unlocks/ba-p/2531955" target="_blank" rel="noopener">here</A></SPAN>.</P> <P>&nbsp;</P> <H3>GA of Opinion Mining</H3> <P>&nbsp;</P> <P>Opinion Mining is a feature of Sentiment Analysis. Also known as Aspect-Based Sentiment Analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text. More than a&nbsp;<STRONG>100 customer</STRONG>&nbsp;have tried the previews and continue to use it across the globe. Here is how the output of Opinion mining can be used to deep dive in to actionable sentiments.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="opinion-mining.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/300020iBDF740378304BA7D/image-size/large?v=v2&amp;px=999" role="button" title="opinion-mining.png" alt="opinion-mining.png" /></span></P> <H3>&nbsp;</H3> <P>More details are available <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <H3>GA of PII (Personally Identifiable Information and Redaction)</H3> <P>&nbsp;</P> <P>The PII feature is part of Named Entity Recognition (NER), and it can identify and redact sensitive entities in text that are associated with an individual person such as: phone numbers, email addresses, mailing addresses, passport numbers. <STRONG>100s of&nbsp;customers</STRONG> have tried the previews and continue to use it across the globe. More details are available <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <H3>GA of Analyze (async endpoint)</H3> <P>&nbsp;</P> <P>The&nbsp;/analyze&nbsp;endpoint for Text Analytics allows you to analyze the same set of text documents with multiple text analytics features in one API call. Previously, to use multiple features you would need to make separate API calls for each operation. Consider this capability when you need to analyze large sets of documents with more than one Text Analytics feature. 
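<P>&nbsp;</P> <P><EM>A rough sketch of a single /analyze call over REST is shown below, using the requests library; the endpoint, key, and sample text are placeholders, and the task names and API version should be checked against the current documentation for your resource.</EM></P> <LI-CODE lang="python">import time
import requests

# Placeholder endpoint and key for your Text Analytics resource.
ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"
KEY = "YOUR-KEY"

body = {
    "displayName": "multi-feature analysis",
    "analysisInput": {
        "documents": [
            {"id": "1", "language": "en",
             "text": "Contact Jane Doe at jane@contoso.com about the Azure invoice."}
        ]
    },
    # One call, several Text Analytics features.
    "tasks": {
        "keyPhraseExtractionTasks": [{"parameters": {"model-version": "latest"}}],
        "entityRecognitionPiiTasks": [{"parameters": {"model-version": "latest"}}],
    },
}

# Submit the job; the service responds with 202 and an operation-location URL to poll.
submit = requests.post(
    f"{ENDPOINT}/text/analytics/v3.1/analyze",
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
submit.raise_for_status()
operation_url = submit.headers["operation-location"]

# Poll until the asynchronous job finishes, then inspect the combined results.
while True:
    job = requests.get(operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
    if job["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

print(job["status"])
print(job.get("tasks", {}))</LI-CODE> <P>&nbsp;</P>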
More details are available <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>All the above offerings along with a demo are also highlighted in the AI Show:</P> <OL> <LI><A href="#" target="_blank" rel="noopener">What's new in Text Analytics for Health - YouTube</A></LI> <LI><A href="#" target="_blank" rel="noopener">What’s New in Text Analytics: Opinion Mining and Async API - YouTube</A></LI> </OL> <P>&nbsp;</P> <P><U>References</U>:</P> <P>You can check the <A href="#" target="_blank" rel="noopener">Quickstart</A>, which offers an easy way to get started with any of the Text Analytics offerings.</P> <P>You can check the Pricing details <A href="#" target="_self">here</A>.</P> <P>&nbsp;</P> <P>&nbsp;</P> Mon, 02 Aug 2021 23:33:23 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-ga-of-text-analytics-for-health-opinion-mining-pii/ba-p/2600747 Parashar 2021-08-02T23:33:23Z Introducing the latest technology advancement in Azure Neural TTS: Uni-TTSv3 https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-the-latest-technology-advancement-in-azure-neural/ba-p/2595922 <P><EM>This post is co-authored with Bohan Li, Yanqing Liu, Xu Tan and Sheng Zhao.</EM></P> <P>&nbsp;</P> <P>At the //Build conference in 2021, Microsoft announced <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981" target="_blank" rel="noopener">a few key updates to Neural TTS</A>, a Speech capability of Cognitive Services. These updates include a multilingual voice (JennyMultilingualNeural) that can speak 14 languages, and a new preview feature in Custom Neural Voice that allows customers to create a brand voice that speaks different languages.</P> <P>&nbsp;</P> <P>In this blog, we introduce the technology advancement behind these feature updates: <STRONG>Uni-TTSv3.</STRONG></P> <P>&nbsp;</P> <H1>Recent challenges in Neural TTS</H1> <P>&nbsp;</P> <P>Neural TTS converts text into lifelike speech. The more natural the voice is, the more convincing the AI becomes. With the <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener">Uni-TTS </A>model introduced in July 2020, we were able to produce near-human-parity voices with high performance. Later in November 2020, with the new LR-Uni-TTS model, we managed to expand neural TTS to more locales quickly including those low-resource languages. Applying the same Uni-TTS technology to the <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777" target="_blank" rel="noopener">Custom Neural Voice</A> feature made it possible for customers including BBC, Progressive, Swisscom, AT&amp;T and more to further create a natural brand voice for their business.</P> <P>&nbsp;</P> <P>With a growing number of customers adopting neural TTS and Custom Neural Voice for more scenarios, we saw new challenges that urge us to bring the technology to the next level. &nbsp;</P> <P>&nbsp;</P> <UL> <LI><STRONG>A single voice that can speak multiple languages.</STRONG> Now more than ever,&nbsp;developers are expected to build voice-enabled applications that can reach a global audience. For example, NPCs in virtual games can talk to users with the same voice in different languages. 
Customer service bots can switch languages and respond to their user inquiries in different markets seamlessly.</LI> <LI><STRONG>A more reliable and efficient platform that can run on different voice data including those from customers.</STRONG> To support more customization scenarios, neural TTS must be robust enough to serve different customer data as customers may not record their voice samples in the ideal environment or provide consistent and clean audios as training data.</LI> </UL> <P>&nbsp;</P> <P><STRONG>Uni-TTSv3</STRONG>, the next generation of neural TTS, was released to address these challenges and empower these features.</P> <P>&nbsp;</P> <H1>Uni-TTSv3</H1> <P>&nbsp;</P> <P>In previous blogs, we introduced three key components of neural TTS. <STRONG>Neural text analysis</STRONG> converts text to pronunciations. <STRONG>Neural acoustic model</STRONG> predicts acoustic features like the mel <A href="#" target="_blank" rel="noopener">spectrogram</A> from text. <STRONG>Neural vocoder</STRONG> converts acoustic features into wave samples.</P> <P>&nbsp;</P> <P>Acoustic features represent the content, speaker, and prosody information of the utterances.</P> <P>Speaker identity usually represents the brand of the voice, so it is important to keep. Prosody like tone, break or emphasis impacts the naturalness of synthetic speech. Neural acoustic models, like Microsoft <A href="#" target="_blank" rel="noopener">Transformer TTS</A> and <A href="#" target="_blank" rel="noopener">FastSpeech </A>models, can predict acoustic features much better by learning the recording data than traditional acoustic models. Thus, it can generate better prosody and speaker similarity.</P> <P>&nbsp;</P> <P>The previous versions (v2) of <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener">Uni-TTS </A>are a teacher-student based model. It requires 3 stages in training: training a teacher model, fine-tuning the teacher, and training student models. This is complex and costly, particularly for Custom Neural Voice which allows customers to create a voice model through complete self-service.</P> <P>&nbsp;</P> <P>To scale the model training to support a larger number of voices, recently we have upgraded the neural acoustic model into <STRONG>Uni-TTSv3</STRONG> which is more robust and cost effective.</P> <P>&nbsp;</P> <H2>How Uni-TTSv3 works</H2> <P>&nbsp;</P> <P>Below diagram describes the training pipeline of Uni-TTSv3.&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="QinyingLiao_0-1627628613819.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299502i0568F8B55C6F5424/image-size/large?v=v2&amp;px=999" role="button" title="QinyingLiao_0-1627628613819.png" alt="Training pipeline of Uni-TTSv3" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Training pipeline of Uni-TTSv3</span></span></P> <P>&nbsp;</P> <P>Uni-TTSv3 model is trained from 3,000 hours of human recording data covering multiple speakers and multiple locales. This well-trained model can speak multiple languages with different speaker identities and serve as the Uni-TTSv3 <STRONG>base model</STRONG> (or, source model). This base model is then fine-tuned to create a TTS model for a target speaker with the target speaker data. 
In the <STRONG>finetuning</STRONG> stage, a <A href="#" target="_blank" rel="noopener"><STRONG>denoising</STRONG> module</A> is used to eliminate the effect of noises in the recording so the output model quality is improved from the data.</P> <P>&nbsp;</P> <P>Uni-TTSv3 models are based on <A href="#" target="_blank" rel="noopener">FastSpeech 2</A> with additional enhancements. Below diagram describes the model structure:</P> <DIV id="tinyMceEditorQinyingLiao_1" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="QinyingLiao_9-1627629027646.png" style="width: 585px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/299503iD74FA3BFB14D2C15/image-size/large?v=v2&amp;px=999" role="button" title="QinyingLiao_9-1627629027646.png" alt="UniTTSv3 model structure" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">UniTTSv3 model structure</span></span></P> <P>&nbsp;</P> <P>Uni-TTSv3 model is a non-autoregressive text-to-speech model and is directly trained from recording, which does not need a teacher-student training process.</P> <P>&nbsp;</P> <P>The <STRONG>encoder</STRONG> module takes phoneme, the unit of sound that can distinguish one word&nbsp;from another in a particular language, as input and outputs representation of the input text. <STRONG>Duration and pitch predictors</STRONG> predict the total time taken by each phoneme (a.k.a the phoneme duration), and the relative highness or lowness of a tone as perceived by humans (a.k.a the pitch). The phoneme duration and pitch are extracted by an external tool during the training stage, and in the synthesis stage, they are predicted by the predictor modules. After pitch and duration prediction, the encoder outputs are expanded according to the duration and fed into the <STRONG>mel-spectrogram decoder</STRONG>. The decoder then outputs the mel-spectrogram - the final acoustic features.</P> <P>&nbsp;</P> <P>In addition, Uni-TTSv3 model also has a phoneme-level <STRONG>fine-grained style embedding</STRONG>, which can help boost the naturalness of synthesized speech. This fine-grained style module extracts fine-grained prosody from the mel-spectrogram in the training stage and is predicted by the encoder during speech synthesis. To train this model on multi-speaker multi-lingual data, <STRONG>speaker id and locale id</STRONG> are added to control synthesized speech timbre and accent. Finally, the encoder output, speaker information, locale information and fine-grained style are fed into the decoder, to generate acoustic features.</P> <P><STRONG>&nbsp;</STRONG></P> <H2>Benefits of Uni-TTSv3</H2> <P>&nbsp;</P> <P>Uni-TTSv3 model has empowered the Azure Text-to-Speech platform and Custom Neural Voice to support multi-lingual voices. With Uni-TTSv3, Custom Neural Voice training pipeline is also upgraded to allows customers to create a high-quality voice model with less training time.</P> <P>&nbsp;</P> <H4>Less training time for a new voice</H4> <P>Uni-TTS v3 simplified the training process compared to the teacher-student training. It reduces training time significantly to around 50% on acoustic training parts. 
With this improvement, Custom Neural Voice now allows customers to create a custom voice model in a much shorter time, and thus helps customers to save significantly on voice training costs.</P> <P>&nbsp;</P> <H4>More robust to handle customer training data at different quality levels</H4> <P>Extensive tests on Uni-TTSv3 in more than 40 languages demonstrated its capability in achieving better or at least the same voice quality compared to the teach-student model (Uni-TTSv2), while the training time is reduced. The whole training process is stable based on the phoneme alignments. Bad cases are also reduced, including the skipping or repeating issues as seen in some attention-based models.</P> <P>&nbsp;</P> <P>In addition, the denoising module helps to remove the typical noise patterns in the training data while keeping the voice fidelity. This makes the Custom Neural Voice platform more robust in handling different customer data.</P> <P>&nbsp;</P> <TABLE> <TBODY> <TR> <TD width="208"> <P><STRONG>Language</STRONG></P> </TD> <TD width="176"> <P><STRONG>Uni-TTSv2</STRONG></P> <P>(<EM>Average compute hours of training: <STRONG>50</STRONG>)</EM></P> </TD> <TD width="192"> <P><STRONG>Uni-TTSv3</STRONG></P> <P><EM>(Average compute hours of training: <STRONG>30</STRONG>)</EM></P> </TD> </TR> <TR> <TD width="208"> <P>English (US)</P> </TD> <TD width="176"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Jenny_unittsV2.wav"></SOURCE></AUDIO></TD> <TD width="192"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Jenny_unittsV3.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="208"> <P>Spanish</P> </TD> <TD width="176"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Dalia_unittsV2.wav"></SOURCE></AUDIO></TD> <TD width="192"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Dalia_unittsV3.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="208"> <P>Chinese (Mandarin, Simplified)</P> </TD> <TD width="176"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Yunyang_unittsV2.wav"></SOURCE></AUDIO></TD> <TD width="192"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/TTS-Wave.Yunyang_unittsV3.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H4>More support for cross-lingual and multi-lingual voices</H4> <P>Trained on multi-lingual and multi-speaker datasets, Uni-TTSv3 can enable a voice to speak multiple languages even without training data from the same human speaker in those target languages.</P> <P>&nbsp;</P> <P>With Uni-TTSv3, a powerful multilingual voice, JennyMultilingualNeural, is released to the Azure TTS platform and enables developers to keep their AI voice consistent while serving different locales. <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981" target="_blank" rel="noopener">Check the samples here.</A> The JennyMultilingualNeural voice is as far as we know the first real-time production voice in the industry that can speak multiple languages naturally with the same timbre. 
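</P> <P>&nbsp;</P> <P>As a hedged sketch of how a developer might try this voice, the example below uses the Speech SDK for Python (the <EM>azure-cognitiveservices-speech</EM> package) and selects JennyMultilingualNeural by name; the subscription key, region, and sample text are placeholders.</P> <P>&nbsp;</P> <LI-CODE lang="python">import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for your Speech resource
speech_config = speechsdk.SpeechConfig(subscription="&lt;your-key&gt;", region="&lt;your-region&gt;")
speech_config.speech_synthesis_voice_name = "en-US-JennyMultilingualNeural"

# audio_config=None keeps the synthesized audio in memory instead of playing it
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)

# The same voice can also switch languages mid-utterance when driven by SSML
# (speak_ssml_async with the &lt;lang&gt; element); plain text is used here for brevity.
result = synthesizer.speak_text_async("Hello, this is Jenny speaking in English.").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized", len(result.audio_data), "bytes of audio")</LI-CODE> <P>&nbsp;</P> <P>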
The average <A href="#" target="_blank" rel="noopener">MOS score</A> of Jenny in all languages supported is above 4.2 on a scale of 5.</P> <P>&nbsp;</P> <P>Uni-TTSv3 has also been integrated into Custom Neural Voice to power cross-lingual features. It allows customers to create a voice that speaks different languages by providing speech samples collected in just one language as training data. This helps customers save effort and cost when creating voices for different markets, as that usually requires selecting a new voice talent and making new recordings through a lengthy process for each language.</P> <P>&nbsp;</P> <H1>Get started</H1> <P>&nbsp;</P> <P>Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audiobooks, or even adding Speech to a translation app, you can make all these experiences natural-sounding and fun with Neural TTS.<BR /><BR /></P> <P>Let us know how you are using or plan to use Neural TTS voices in this&nbsp;<A href="#" target="_blank" rel="noopener">form</A>. You can also open Speech SDK issues <A href="#" target="_blank" rel="noopener">here</A>.</P> <P>&nbsp;</P> <P>We look forward to hearing about your experience and developing more compelling services together with you for developers around the world.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Add voice to your app in 15 minutes</A></P> <P><A href="#" target="_blank" rel="noopener">Explore the available voices in this demo</A></P> <P><A href="#" target="_blank" rel="noopener">Build your custom voice</A></P> <P><A href="#" target="_blank" rel="noopener">Build a voice-enabled bot</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy Azure TTS voices on prem with Speech Containers</A></P> <P><A href="#" target="_blank" rel="noopener">Learn more about other Speech scenarios</A></P> <H1>&nbsp;</H1> Fri, 30 Jul 2021 07:39:35 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-the-latest-technology-advancement-in-azure-neural/ba-p/2595922 Qinying Liao 2021-07-30T07:39:35Z Flexible Deployment https://gorovian.000webhostapp.com/?exam=t5/azure-ai/flexible-deployment/ba-p/2564524 <P>In a <A href="https://gorovian.000webhostapp.com/?exam=t5/blogs/blogworkflowpage/blog-id/AzureAIBlog/article-id/160?WT.mc_id=aiml-34112-heboelma" target="_self">previous blog</A> I wrote about Cognitive Services in containers, and we dove into an example of how to deploy a Speech Container.</P> <P>&nbsp;</P> <P>In this blog we are going to dive into the different ways you can run Cognitive Services in a container on Azure. We first look at the different options for running a container on Azure, and then we investigate a few scenarios for how you might want to run your application.</P> <P>&nbsp;</P> <P><STRONG>What are containers?</STRONG><BR /><EM>Containerization is an approach to software distribution in which an application or service, including its dependencies &amp; configuration, is packaged together as a container image. With little or no modification, a container image can be deployed on a container host. Containers are isolated from each other and the underlying operating system, with a smaller footprint than a virtual machine. 
Containers can be instantiated from container images for short-term tasks and removed when no longer needed.</EM><BR /><A title="Read more on Microsoft Docs" href="#" target="_blank" rel="noopener">Read more on Microsoft Docs</A></P> <P>&nbsp;</P> <H2>Running Containers on Azure</H2> <P>First, let us find out which options you can choose from when hosting a container solution on Azure. We will look at 4 different options. For every option there is a code sample using the Azure CLI.</P> <P>&nbsp;</P> <H3><SPAN>Create a Text Analytics endpoint in Azure</SPAN></H3> <P><SPAN>To follow along with the code samples you need to create a Text Analytics endpoint in Azure first.</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># Create a resource group called demo_rg in the region westeurope az group create --name demo_rg --location westeurope # Create a Text Analytics Cognitive Service Endpoint az cognitiveservices account create \ --name textanalytics-resource \ --resource-group demo_rg \ --kind TextAnalytics \ --sku F0 \ --location westeurope # Get the endpoint URL az cognitiveservices account show --name textanalytics-resource --resource-group demo_rg --query properties.endpoint -o json # Get the API Keys az cognitiveservices account keys list --name textanalytics-resource --resource-group demo_rg </LI-CODE> <P>&nbsp;</P> <H3>Use the container directly or create a reusable container</H3> <P>You can run the containers directly from the Microsoft Container Registry or create a reusable container and store it in your own container registry.</P> <P><BR />Here is a sample of how to run the Text Analytics container locally from the Microsoft Container Registry.</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Run the container docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \ mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment \ Eula=accept \ Billing={ENDPOINT_URI} \ ApiKey={API_KEY} # Test the container curl -X POST "http://localhost:5000/text/analytics/v3.0/sentiment" -H "Content-Type: application/json" -d "{\"documents\":[{\"language\":\"en\",\"id\":\"1-en\",\"text\":\"Hello beautiful world.\"}]}" </LI-CODE> <P>&nbsp;</P> <H3><SPAN>Reuse a container</SPAN></H3> <P><SPAN>You can use container recipes to create Cognitive Services containers that can be reused. Containers can be built with some or all configuration settings so that they are not needed when the container is started.</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P>Once you have this new container layer (with settings) and have tested it locally, you can store the container in a container registry. When the container starts, it will only need those settings that are not currently stored in the container. The container hosted in your private registry provides configuration space for you to pass those settings in.</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Create a Dockerfile FROM mcr.microsoft.com/azure-cognitive-services/sentiment:latest ENV billing={BILLING_ENDPOINT} ENV apikey={ENDPOINT_KEY} ENV EULA=accept # Build the docker image docker build -t &lt;your-image-name&gt; . 
# Run your container docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 &lt;your-image-name&gt; # Test the container curl -X POST "http://localhost:5000/text/analytics/v3.0/sentiment" -H "Content-Type: application/json" -d "{\"documents\":[{\"language\":\"en\",\"id\":\"1-en\",\"text\":\"Hello beautiful world.\"}]}"</LI-CODE> <P>&nbsp;</P> <P>Now that you have your own container, you can push it to your own container registry.</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Create the container registry az acr create --resource-group demo_rg --name henksregistry --sku Basic --admin-enabled true # Get the username az acr credential show -n henksregistry --query username # Get the password az acr credential show -n henksregistry --query passwords[0].value # Get the registry login server URL az acr show -n henksregistry --query loginServer # Sign in to your private registry with the details from the steps above docker login henksregistry.azurecr.io # Tag your container docker tag &lt;your-image-name&gt; henksregistry.azurecr.io/&lt;your-image-name&gt; # Push your container to the registry docker push henksregistry.azurecr.io/&lt;your-image-name&gt; </LI-CODE> <P>&nbsp;</P> <P data-unlink="true"><SPAN>Now you have a Text Analytics container in your own private Azure Container Registry.<BR /><BR /><STRONG>Read more on Docs</STRONG><BR />-&nbsp;<A title="Install and run Text Analytics containers" href="#" target="_blank" rel="noopener">Install and run Text Analytics containers</A> &nbsp;<BR />-&nbsp;<A title="Create containers for reuse&nbsp;" href="#" target="_blank" rel="noopener">Create containers for reuse&nbsp;</A><BR />-&nbsp;<A title="Create a private container registry using the Azure CLI" href="#" target="_blank" rel="noopener">Create a private container registry using the Azure CLI</A></SPAN></P> <P><SPAN><BR />Let us go to the next step and see how we can deploy these containers to the different offerings and what the benefits are.</SPAN></P> <P>&nbsp;</P> <H3><SPAN>Azure Container Instances (ACI)</SPAN></H3> <P><SPAN>The easiest option for running a container is to use an Azure Container Instance, also referred to as the serverless container option. It can run Linux and Windows containers. If you use Linux-based containers, you can run multiple containers in a container group, mount volumes from Azure Files, monitor them with Azure Monitor, and use a GPU.</SPAN></P> <P><SPAN>Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs. You can run them securely in a virtual network and use a Traffic Manager to distribute traffic across multiple instances.<BR /><BR /><STRONG>Read more about Azure Container Instances</STRONG><BR />-&nbsp;<A title="Azure Container Instances documentation" href="#" target="_blank" rel="noopener">Azure Container Instances documentation</A><BR />-&nbsp;<A title="Azure Container Instances (ACI) across 3 regions in under 30 seconds with Azure Traffic Manager by&nbsp;Aaron Wislang" href="#" target="_blank" rel="noopener">Azure Container Instances (ACI) across 3 regions in under 30 seconds with Azure Traffic Manager by&nbsp;Aaron Wislang</A><BR />-&nbsp;<A title="What is Traffic Manager?" 
href="#" target="_blank" rel="noopener">What is Traffic Manager?</A></SPAN></P> <P>&nbsp;</P> <P><SPAN>This sample shows how you can deploy the sentiment analysis container to an Azure Container Instance.</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash">az container create --resource-group &lt;insert resource group name&gt; \ --name &lt;insert container name&gt; \ --dns-name-label &lt;insert unique name&gt; \ --memory 2 --cpu 1 \ --ports 5000 \ --image mcr.microsoft.com/azure-cognitive-services/textanalytics/sentiment:3.0-en \ --environment-variables \ Eula=accept \ Billing=&lt;insert endpoint&gt; \ ApiKey=&lt;insert apikey&gt; </LI-CODE> <P>&nbsp;</P> <H3><SPAN>Azure App Service</SPAN></H3> <P><SPAN>Azure App Service is a great way to host a simple container solution, it offers some more out of the box functionality then Azure Container Instances. All the features from an App Service like Azure <A title="AutoScale" href="#" target="_blank" rel="noopener">AutoScale</A> are available and easily integrate with services like Azure Front Door and traffic manager to perform traffic routing and load-balancing.</SPAN></P> <P>&nbsp;</P> <P><SPAN>App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management. You can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domain, and TLS/SSL certificates.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Azure App Service for your containerized solutions is great if you are looking for easy to use service that automatically scales when traffic increases and integrates with other Azure Services for traffic management like Traffic Manager and Azure Frontdoor.<EM><BR /></EM></SPAN></P> <P>&nbsp;</P> <P><SPAN><STRONG>Read more about Azure App Service on Microsoft Docs and Azure.com.</STRONG><BR />- <A title="App Service documentation" href="#" target="_blank" rel="noopener">App Service documentation</A>&nbsp;<BR />-&nbsp;<A title="Web App for Containers" href="#" target="_blank" rel="noopener">Web App for Containers</A><EM><BR /><BR /></EM></SPAN></P> <P><SPAN>This sample shows how you can deploy the sentiment analysis container to an Azure App Service</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="bash"># Create an App Service az appservice plan create --name &lt;app-name&gt; --resource-group demo_rg --sku S1 --is-linux # Create the webapp with the private container (Deployment can take a while) az webapp create --name &lt;webapp-name&gt; --plan &lt;app-name&gt; --resource-group demo_rg -i &lt;registryname&gt;.azurecr.io/&lt;your-image-name&gt; curl -X POST "http://&lt;webapp-name&gt;.azurewebsites.net/text/analytics/v3.0/sentiment" -H "Content-Type: applicationhttps://techcommunity.microsoft.com/json" -d "{\"documents\":[{\"language\":\"en\",\"id\":\"1-en\",\"text\":\"Hello beautiful world.\"}]}" </LI-CODE> <P>&nbsp;</P> <P><SPAN>The options above are the quickest way to get started with containers on Azure. But some solutions require advanced orchestration or need to be able to run on edge devices in disconnected scenarios. For this scenarios Azure offers Azure Kubernetes Services and IoT Edge.<BR /></SPAN></P> <P>&nbsp;</P> <H3><SPAN>Azure Kubernetes Services </SPAN></H3> <P><SPAN>Deploy and manage containerized applications more easily with a fully managed Kubernetes service. 
Azure Kubernetes Service (AKS) offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. Unite your development and operations teams on a single platform to rapidly build, deliver, and scale applications with confidence.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Kubernetes is a rapidly evolving platform that manages container-based applications and their associated networking and storage components. Kubernetes focuses on the application workloads, not the underlying infrastructure components. Kubernetes provides a declarative approach to deployments, backed by a robust set of APIs for management operations.</SPAN></P> <P>&nbsp;</P> <P><SPAN>You can build and run modern, portable, microservices-based applications, using Kubernetes to orchestrate and manage the availability of the application components. Kubernetes supports both stateless and stateful applications as teams progress through the adoption of microservices-based applications.</SPAN></P> <P>&nbsp;</P> <P><SPAN>As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.</SPAN></P> <P>&nbsp;</P> <P><SPAN>AKS provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, like upgrade coordination. The Azure platform manages the AKS control plane, and you only pay for the AKS nodes that run your applications.</SPAN></P> <P>&nbsp;</P> <P><A title="Read more on Microsoft Docs" href="#" target="_self"><SPAN>Read more on Microsoft Docs</SPAN></A></P> <P>&nbsp;</P> <H3><SPAN>IoT Edge</SPAN></H3> <P><SPAN>Azure IoT Edge is a fully managed service built on Azure IoT Hub. Deploy your cloud workloads—artificial intelligence, Azure and third-party services, or your own business logic—to run on Internet of Things (IoT) edge devices via standard containers. By moving certain workloads to the edge of the network, your devices spend less time communicating with the cloud, react more quickly to local changes, and operate reliably even in extended offline periods.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Azure IoT Edge moves cloud analytics and custom business logic to devices so that your organization can focus on business insights instead of data management. Scale out your IoT solution by packaging your business logic into standard containers, then you can deploy those containers to any of your devices and monitor it all from the cloud.</SPAN></P> <P>&nbsp;</P> <P><SPAN>Analytics drives business value in IoT solutions, but not all analytics needs to be in the cloud. If you want to respond to emergencies as quickly as possible, you can run anomaly detection workloads at the edge. 
If you want to reduce bandwidth costs and avoid transferring terabytes of raw data, you can clean and aggregate the data locally then only send the insights to the cloud for analysis.</SPAN></P> <P>&nbsp;</P> <P><SPAN>If you want to learn more about IoT Edge you can watch the video "<A title="IoT ELP Module 3 Adding Intelligence, Unlocking New Insights with AI &amp; ML" href="#" target="_blank" rel="noopener">IoT ELP Module 3 Adding Intelligence, Unlocking New Insights with AI &amp; ML</A>"&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN><LI-VIDEO vid="https://www.youtube.com/watch?v=doLvCmHSKaM" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/doLvCmHSKaM/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <P>&nbsp;</P> <P><SPAN>There are also other services on Azure that can run containers, but might need some work to get the Cognitive Services containers running on them:<BR />- <A title="Azure Batch" href="#" target="_self">Azure Batch</A><BR />- <A title="Azure Function App" href="#" target="_blank" rel="noopener">Azure Function App</A><BR />- <A title="Azure Service Fabric" href="#" target="_blank" rel="noopener">Azure Service Fabric</A><BR /><BR /></SPAN></P> <P><A title="Continue your learning journey and&nbsp;get skilled up on all things Azure AI!" href="#" target="_blank" rel="noopener"><SPAN>Continue your learning journey and&nbsp;get skilled up on all things Azure AI!</SPAN></A></P> Tue, 20 Jul 2021 18:07:42 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/flexible-deployment/ba-p/2564524 hboelman 2021-07-20T18:07:42Z Protect your organization’s growth by using Azure Metrics Advisor https://gorovian.000webhostapp.com/?exam=t5/azure-ai/protect-your-organization-s-growth-by-using-azure-metrics/ba-p/2564682 <P data-unlink="true">As more and more companies are embracing digital transformation to understand the performance of their organizations, services, and equipment, huge amounts of time series data are collected. Making use of that data effectively to help with the decision-making processes is a key problem to be solved. <A href="#" target="_blank" rel="noopener">Azure&nbsp;Metrics Advisor</A>, which is built on top of the <A href="#" target="_blank" rel="noopener">Anomaly Detector API</A><SPAN>,</SPAN> is designed to solve that.</P> <P>&nbsp;</P> <P>We are excited to announce the <STRONG>GA release</STRONG> of Azure Metrics Advisor! As of now, the service is available in all <A href="#" target="_blank" rel="noopener">Azure hero regions</A>. The GA version of Azure Metrics Advisor offers several key updates that we’ll dive into below.</P> <P>&nbsp;</P> <H2><FONT size="5"><STRONG>What’s new</STRONG></FONT></H2> <P>&nbsp;</P> <P><FONT size="4"><STRONG>Enhanced metric onboarding experience with robust verification and step-by-step guidance</STRONG></FONT></P> <P class="xxmsonormal"><SPAN style="color: black;">By introducing this upgrade, we’ve improved guidance on how to write a valid query and get the metric data prepared as the expected schema, we’ve also optimized the onboarding flow and enhanced the verification process. This could help distinguish issues on different stages and show an intuitive error message to the customer to resolve accordingly. 
Also, a&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">step-by-step tutorial</A><SPAN style="color: black;"> has been added to the Metrics Advisor documentation, which elaborates on the key points to note and provides several sample queries.</SPAN></P> <P class="xxmsonormal">&nbsp;</P> <P class="xxmsonormal"><SPAN style="color: black;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="onboarding-trim.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297007i026C91D6DF8BB2D7/image-size/large?v=v2&amp;px=999" role="button" title="onboarding-trim.gif" alt="onboarding-trim.gif" /></span></SPAN></P> <H3>&nbsp;</H3> <P><FONT size="4"><STRONG>Simplified diagnostic flow focused on finding the root cause of the current incident</STRONG></FONT></P> <P class="xxmsonormal"><SPAN style="color: black;">Root cause analysis is one of the key differentiators of Metrics Advisor. At the preview stage, a set of features was offered to help customers perform the analysis manually. </SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">&nbsp;</SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">In this new version, </SPAN><A href="#" target="_blank" rel="noopener">the diagnostic experience</A><SPAN style="color: black;"> has been upgraded to be more automatic, and the diagnostic flow has been consolidated into two major steps: first, analyzing the root cause down to specific dimensions using the diagnostic tree; then, finding related anomalies across metrics within the pre-configured “</SPAN><A href="#" target="_blank" rel="noopener">Metrics graph</A><SPAN style="color: black;">”. This upgrade reflects a standard diagnostic flow that we learned from recent customer engagements and best practices.</SPAN></P> <P class="xxmsonormal">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 586px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297014i13BB7BC6B90EBC79/image-size/large?v=v2&amp;px=999" role="button" title="image.png" alt="image.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 603px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297013i9BADCD347E5D2377/image-size/large?v=v2&amp;px=999" role="button" title="image.png" alt="image.png" /></span></P> <P>&nbsp;</P> <P><FONT size="4"><STRONG>New notification channel through Microsoft Teams</STRONG></FONT></P> <P class="xxmsonormal"><SPAN style="color: black;">Microsoft Teams has been adopted by many organizations as the major tool for communicating with colleagues and collaborating on daily work. Hooking Metrics Advisor up with Microsoft Teams provides near real-time notifications when an anomaly is detected. </SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">&nbsp;</SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">This is enabled through a newly introduced “</SPAN><A href="#" target="_blank" rel="noopener">Teams hook</A><SPAN style="color: black;">” within Metrics Advisor. The setup is quite simple: create an “<STRONG>Incoming Webhook</STRONG>” connector in Teams, then configure the generated URL in a “Teams hook” within Metrics Advisor. 
Finally, subscribe to the anomalies that you’re interested in, and a notification will be delivered to the Teams channel.</SPAN></P> <P class="xxmsonormal">&nbsp;</P> <P><FONT size="4"><STRONG>New data sources supported</STRONG></FONT></P> <P class="xxmsonormal"><SPAN style="color: black;">In the GA version, two new data sources are supported: <STRONG>Azure Log Analytics</STRONG> and <STRONG>Azure Event Hubs</STRONG>, which enable customers in AIOps and IoT monitoring scenarios to onboard their metric data easily and directly.</SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">&nbsp;</SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">Azure Log Analytics consolidates data from different sources, such as platform logs from Azure services, log and performance data from virtual machine agents, and usage and performance data from applications, into a single workspace. By onboarding these metrics to Metrics Advisor, those operational metrics can be monitored effectively, ultimately leading to better service performance. </SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">&nbsp;</SPAN></P> <P class="xxmsonormal"><SPAN style="color: black;">Azure Event Hubs is widely used in IoT scenarios and is well suited to real-time data ingestion. With this source, metric data is supported at a granularity as fine as 60 seconds.</SPAN></P> <P class="xxmsonormal">&nbsp;</P> <H2><FONT size="5"><STRONG>Customer endorsement</STRONG></FONT></H2> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="image.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/297012i6B4E242025B3C889/image-size/large?v=v2&amp;px=999" role="button" title="image.png" alt="image.png" /></span></P> <P>&nbsp;</P> <P>Samsung wants to keep customers informed and entertained, so the company works to make sure that its popular Smart TV service stays up and running. To do this, it tracks a wide range of metrics from numerous data sources. Samsung used a rule-based system for anomaly detection, but this solution was limited by the complex environment, and it generated many false positives. The company deployed Microsoft Azure Metrics Advisor in China, and now it can perform root cause analysis within minutes of an alert being triggered. The move to proactive metric monitoring and diagnosis with Metrics Advisor helps staff devote more time to intervention and, ultimately, to customer happiness. Read more at <A href="#" target="_blank" rel="noopener">Microsoft Customer Story-Samsung resolves anomalies faster with Azure Metrics Advisor, keeps Smart TV service humming</A>.</P> <P>&nbsp;</P> <H2><FONT size="5"><STRONG>Get started today</STRONG></FONT></H2> <P>Go to the Azure portal to create your new Metrics Advisor resource <A href="#" target="_blank" rel="noopener">here</A>. 
Read <A href="#" target="_blank" rel="noopener">Metrics Advisor document</A> to learn more about the service capability and the scenario that can be applied.</P> <P>&nbsp;</P> <H2><FONT size="5"><STRONG>For more information</STRONG></FONT></H2> <P>Metrics Advisor GA Build announcement:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">https://azure.microsoft.com/en-us/blog/harness-the-power-of-data-and-ai-in-your-applications-with-azure/</A></LI> <LI><A href="#" target="_blank" rel="noopener">https://www.crn.com/slide-shows/cloud/15-big-azure-announcements-made-at-microsoft-build-2021/2</A></LI> <LI><A href="#" target="_blank" rel="noopener">https://www.octaviantg.com/blog/azure-applied-ai-services</A></LI> <LI><A href="#" target="_blank" rel="noopener">https://blogs.microsoft.com/ai/azure-applied-ai-services-accelerate-ai-solution-development-to-help-businesses-soar/</A></LI> </UL> Wed, 21 Jul 2021 01:42:02 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/protect-your-organization-s-growth-by-using-azure-metrics/ba-p/2564682 Qiyang 2021-07-21T01:42:02Z Microsoft named a Leader in 2021 Forrester's Cognitive Search Wave https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-named-a-leader-in-2021-forrester-s-cognitive-search/ba-p/2564185 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mernanashed_0-1626732262256.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/296916i92B00C9FE2D66072/image-size/large?v=v2&amp;px=999" role="button" title="mernanashed_0-1626732262256.png" alt="mernanashed_0-1626732262256.png" /></span></P> <P>&nbsp;</P> <P>Forrester Research recently released their <A href="#" target="_self">Wave report for 2021 Cognitive Search</A>.&nbsp;<A href="#" target="_blank" rel="noopener">Microsoft</A> is named a leader in this Wave. You can download a complimentary copy of the&nbsp;<A href="#" target="_blank" rel="noopener">Forrester Wave™: &nbsp;Cognitive Search, Q3 2021</A> for the full report.</P> <P>&nbsp;</P> <P>In its report, Forrester stated that cognitive search customers should look for providers that offer precision of search results, tuning tools to improve precision of search results, and limitless efficient scaling to maximize performance and optimize compute resources. Forrester named Microsoft a leader in this evaluation and recognized for our scalability as well as our, “sophisticated underlying AI-powered natural language processing.” &nbsp;</P> <P>&nbsp;</P> <P>Let’s take a closer look at the capabilities that <A href="#" target="_blank" rel="noopener">Azure Cognitive Search</A> features:</P> <P>&nbsp;</P> <OL> <LI>Effortlessly Implemented – With Azure Cognitive Search, you can deploy a search index quickly and avoid operational maintenance. The fully managed service reduces the complexity of data ingestion and index creation by integrating with popular Azure storage solution, databases, and other data stores, while offering indexing functionality exposed through a simple REST API or .NET SDK. 
You can quickly deploy a fully-configured search service with intuitive user experiences like scoring, faceting, suggestions, synonyms, and geo-search – all while avoiding the operational overhead needed to debug index corruption, monitor service availability, or manually scale during traffic fluctuations.<BR /><BR /></LI> <LI>Easily Integrated – Azure Cognitive Search has <A href="#" target="_blank" rel="noopener">built-in AI capabilities</A>, including optical character recognition (OCR), key phrase extraction, and named entity recognition, to unlock insights. These built-in AI capabilities, extensible from several&nbsp;<A href="#" target="_blank" rel="noopener">Azure Cognitive Services</A>, help extract insights such as sentiment analysis, video analysis, and translation, to name a few. &nbsp;Azure Cognitive Search has connected many of these pre-built Cognitive Services from Microsoft and made them available within this service. <BR /><BR /></LI> <LI>Readily Customizable – Azure Cognitive Search enables optimization of search experiences by integrating your own data to create custom models from Azure Machine Learning and classifiers that feed into Cognitive Search. With these capabilities, Azure Cognitive Search is extensible and customizable to fit industry-specific classifiers and skills. <BR /><BR /></LI> <LI>Most Relevant Results – Relevance is the most fundamental capability for any search scenario, encompassing the ability to understand user intent from a search query and deliver the most relevant results in an appropriately ranked order. Azure Cognitive Search offers <A href="#" target="_blank" rel="noopener">semantic search</A> capabilities powered by deep learning models that understand user intent to surface and rank the most relevant search results.</LI> </OL> <P>We feel Azure Cognitive Search is the best environment for any organization that is building out core search solutions, including catalog search and in-app search, as well as for <A href="#" target="_blank" rel="noopener">knowledge mining</A> solutions, including digital asset management, technical content research, and more. Be it via the Cognitive Search portal, APIs, or accompanying SDK, Azure Cognitive Search makes search capabilities broadly available to developers and offers the best way to perform secure, managed, and scalable search.</P> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="100%" height="30px"> <P><STRONG>“For developers, Azure Cognitive Search’s sweet spot is for applications that can benefit from the scalability of Microsoft Azure and continued advances in Microsoft’s world-class AI technology.”</STRONG></P> <P class="lia-align-right"><EM>-The Forrester Wave<SUP>TM</SUP>: Cognitive Search, Q3 2021</EM></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>Microsoft’s mission is <STRONG>to empower every person and every organization on the planet to achieve more.</STRONG> With Azure Cognitive Search, we’re trying to accomplish this for any developer looking to add intelligent search to their applications.</P> Tue, 20 Jul 2021 17:54:25 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-named-a-leader-in-2021-forrester-s-cognitive-search/ba-p/2564185 mernanashed 2021-07-20T17:54:25Z Learn how to add AI to your mission-critical apps in four weeks https://gorovian.000webhostapp.com/?exam=t5/azure-ai/learn-how-to-add-ai-to-your-mission-critical-apps-in-four-weeks/ba-p/2543846 <P>Calling all developers, data scientists, and machine learning engineers! 
Microsoft uses industry leading AI models across the organization in numerous solutions, and we make those <EM>same</EM> AI capabilities available to you to use in your <EM>own</EM> applications.</P> <P>&nbsp;</P> <P>Azure AI provides you access to the same AI services that power enterprise products such as Teams and Xbox, and that are built to meet global scale and security requirements.</P> <H1>&nbsp;</H1> <H1>Announcing new AI &amp; ML content&nbsp; for Developers and Data Scientists</H1> <P>&nbsp;</P> <P>Today we’re excited to announce additional content on the resource pages on Azure.com, with a rich new set of content for <A href="#" target="_self">data scientists</A> and <A href="#" target="_blank" rel="noopener">developers</A>. With the additional content, you can continue growing your skills with a curated learning journey to skill up on Azure AI and Machine learning in 30 days! We’ll also dive deeper into what it means to have access to the same AI technology that powers Microsoft, including:</P> <UL> <LI>How to scale your applications seamlessly</LI> <LI>How to run Azure AI services anywhere via containers for flexible deployment, and</LI> <LI>How to ensure your applications are secure and compliant for your business needs</LI> <LI>How to train and deploy models at scale</LI> <LI>What is MLOps and why is it important</LI> <LI>Understanding hybrid &amp; multi-cloud machine learning and securing ML environments</LI> </UL> <P>You’ll also have access to:</P> <UL> <LI>Learn how Microsoft’s Flight Simulator developers and UCLA Health’s data scientists are using Azure AI to develop mission-critical, scalable AI and ML solutions.</LI> <LI>A curated learning journey to take you from zero to hero in 30 days. Each learning journey has videos, tutorials, and hands-on exercises to help prepare you to pass a Microsoft certification in just 4 weeks. Upon completing the learning journey, you’ll be eligible to receive 50% off a Microsoft Certification exam.</LI> <LI>Engage with our engineering teams and stay up to date with the latest innovations on our <A href="#" target="_blank" rel="noopener">AI Tech Community</A>, where you’ll find blogs, discussion forums, and more.</LI> </UL> <P>Whether you’re new to AI and ML, or already have a foundation and are looking to get further skilling, the videos, tutorials, and other content on these pages will help you on your journey. 
We’re excited to see how you leverage the power of Azure AI and apply these breakthrough-AI technologies to your own solutions!</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="mernanashed_1-1626195836274.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295552iBACA83C52981DFD0/image-size/large?v=v2&amp;px=999" role="button" title="mernanashed_1-1626195836274.png" alt="mernanashed_1-1626195836274.png" /></span></P> <P><EM>Pictured above: AI learning journey for developers and data scientists.</EM></P> <P>&nbsp;</P> <H1>Get started today</H1> <P>Check out the pages to get started with your 30-day learning journey and explore videos, tutorials, and other content:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">AI Developer Resources</A></LI> <LI><A href="#" target="_blank" rel="noopener">Data Scientist Resources</A></LI> </UL> Tue, 13 Jul 2021 17:09:33 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/learn-how-to-add-ai-to-your-mission-critical-apps-in-four-weeks/ba-p/2543846 mernanashed 2021-07-13T17:09:33Z Accelerate PyTorch transformer model training with ONNX Runtime – a deep dive https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerate-pytorch-transformer-model-training-with-onnx-runtime/ba-p/2540471 <P><EM>Authors: Ravi shankar Kolli (<LI-USER uid="484531"></LI-USER>) , Aishwarya Bhandare (@ashbhandare), M. Zeeshan Siddiqui (<LI-USER uid="1098974"></LI-USER>) , Kshama Pawar (<LI-USER uid="1099824"></LI-USER>)&nbsp;, Sherlock Huang (<LI-USER uid="670116"></LI-USER>) and the ONNX Runtime Training Team</EM></P> <P>&nbsp;</P> <H2>Why ONNX Runtime for PyTorch?</H2> <P>ONNX Runtime (ORT) for PyTorch accelerates training large scale models across multiple GPUs with up to 37% increase in training throughput over PyTorch and up to 86% speed up when combined with <A href="#" target="_blank" rel="noopener">DeepSpeed</A>. Today, transformer models are fundamental to Natural Language Processing (NLP) applications. These models with billions of parameters utilize multiple GPUs for distributed training. This large scale is costly and time consuming for pre-training and fine-tuning such complex models. Training with ONNX Runtime for PyTorch, through its&nbsp;<SPAN>torch_ort.ORTModule API,</SPAN> speeds up training through efficient memory utilization, highly-optimized computational graph, mixed precision execution, all through a quick and easy, couple-line change to existing PyTorch training scripts. It also provides hardware support for both Nvidia and <A href="#" target="_blank" rel="noopener">AMD GPUs</A> and extensibility with custom operators, optimizers and hardware accelerator support. 
ONNX Runtime for PyTorch empowers AI developers to take full advantage of the PyTorch ecosystem – with the flexibility of PyTorch and the performance using ONNX Runtime.</P> <P>&nbsp;</P> <H2>Flexibility in Integration</H2> <P>To use ONNX Runtime as the backend for training your PyTorch model, you begin by installing the torch-ort package and making the following 2-line change to your training script.&nbsp;<SPAN>ORTModule class is a simple wrapper for torch.nn.Module that optimizes the memory and computations required for training.</SPAN></P> <P>&nbsp;</P> <PRE>from torch_ort import ORTModule<BR />model = ORTModule(model)</PRE> <OL> <LI><EM>import torch_ort</EM> – allows you to access all the APIs and features of ONNX Runtime</LI> <LI><EM>model = torch_ort.ORTModule(model)</EM> – wraps the torch.nn.Module in the PyTorch training script with ORTModule to allow acceleration using ONNX Runtime</LI> </OL> <P>The rest of the training loop is unmodified. ORTModule can be flexibly composed with <A href="#" target="_self">torch.nn.Module</A>, allowing the user to wrap part or whole of the model to run with ORT. For instance, users can choose to wrap the encoder-decoder portion of the model while leaving the loss function in PyTorch. ORT will speed up the wrapped portion of the model.</P> <P>&nbsp;</P> <P>ONNX Runtime for PyTorch has been <A href="#" target="_self">integrated</A> to run popular Hugging Face models with a centralized code change. Additional installation instructions can be found at <A href="#" target="_blank" rel="noopener">pytorch/ort</A> and samples can be found at <A href="#" target="_blank" rel="noopener">ONNX Runtime Training Examples.</A></P> <P>&nbsp;</P> <H2>Performance Results</H2> <P>We have validated Hugging Face Transformer models with performance gains in samples/second ranging from 37% (baseline PyTorch) to 86% (combined with DeepSpeed) for different models for pre-training and fine-tuning scenarios. The performance for all runs was measured with models running on the <A href="#" target="_self">Azure NDv2 SKU</A> on a single node (except for the A100 results), with torch autocast as the mixed precision solution. 
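</P> <P>&nbsp;</P> <P>To make the setup concrete, below is a minimal, hedged sketch of a PyTorch training loop that combines the two-line ORTModule change with torch.cuda.amp autocast, the mixed precision configuration used for these measurements. <EM>MyModel</EM> and <EM>train_loader</EM> are illustrative placeholders, not part of the torch-ort API.</P> <P>&nbsp;</P> <LI-CODE lang="python">import torch
from torch_ort import ORTModule

device = torch.device("cuda")
model = MyModel().to(device)          # MyModel is a placeholder torch.nn.Module
model = ORTModule(model)              # the only ORT-specific change

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss_fn = torch.nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # loss scaling for mixed precision

model.train()
for inputs, labels in train_loader:   # train_loader is a placeholder DataLoader
    inputs, labels = inputs.to(device), labels.to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # run the forward pass in mixed precision
        outputs = model(inputs)
        loss = loss_fn(outputs, labels)
    scaler.scale(loss).backward()     # backward runs through the ORT-built graph
    scaler.step(optimizer)
    scaler.update()</LI-CODE> <P>&nbsp;</P> <P>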
Please refer to the configuration mentioned in the <A href="#" target="_blank" rel="noopener">onnx-runtime-training-examples</A> repo to reproduce the results.&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_0-1626160648567.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295403i11DF200351A067D8/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_0-1626160648567.png" alt="kshamamsft_0-1626160648567.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 1</EM><EM>: ORT throughput for PyTorch Hugging Face models</EM></P> <P class="lia-align-left">&nbsp;</P> <P>ONNX Runtime for PyTorch also supports <A href="#" target="_blank" rel="noopener">A100 Tensor Core GPU</A> showing upto 1.31x throughput improvements on some Hugging Face models as seen in Figure 2.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_1-1626135034004.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295295i58DB07421EFAAD6C/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_1-1626135034004.png" alt="kshamamsft_1-1626135034004.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 2</EM><EM>: ORT throughput on A100 Tensor Core GPU</EM></P> <P>&nbsp;</P> <P>The speedup is a result of various graph optimizations, use of fused and optimized GPU kernels, and efficient memory handling that ORT performs in the backend. We will discuss more details in the following sections.</P> <P>&nbsp;</P> <H2>10,000 foot view of ORTModule internals</H2> <P>&nbsp;</P> <H3><STRONG>Overview</STRONG></H3> <P>ORTModule is a python wrapper around torch.nn.Module (Figure 3, ORTModule wraps around torch.nn.Module) that intercepts forward and backward calls and delegates them to ONNX Runtime backend to achieve better training performance. ORTModule serves as a drop-in replacement for any torch.nn.Module for ease of use. Upon the initial forward call, the PyTorch module is exported to ONNX graph using <A href="#" target="_self">torch-onnx</A> exporter, which is then used to create a session. ORT’s <A href="#" target="_self">native auto-differentiation</A> is invoked during session creation by augmenting the forward graph to insert gradient nodes (backward graph). <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/onnx-runtime-training-technical-deep-dive/ba-p/1398310" target="_self">Static graph transformations</A> such as constant folding, redundant operation elimination, and operator fusions are further applied to optimize the computation graph. ORTModule backend uses highly optimized kernels to execute the graph with optimal use of the GPU resources.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-right" image-alt="kshamamsft_2-1626127015497.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295233i8FC24BEF8007B9AC/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_2-1626127015497.png" alt="kshamamsft_2-1626127015497.png" /></span></P> <P class="lia-align-center"><EM>Figure 3: ORTModule execution flow</EM></P> <P>&nbsp;</P> <H3><FONT size="3"><STRONG>Partial Graph Execution</STRONG></FONT></H3> <P>PyTorch executes the model with separate forward and backward calls, whereas ORT represents the model as a single static computation graph. 
ORTModule implements a partial graph executor to mimic PyTorch’s forward and backward calls. Graph state, such as the stashed intermediate tensor, between a pair of forward and backward calls is captured and shared through RunContext (Figure 3).<BR /><BR /></P> <H3><STRONG>Tensor Exchange</STRONG></H3> <P>The tensors such as module input, outputs, gradients, etc. are exchanged between PyTorch and ORT using <A href="#" target="_self">DLPack</A> to avoid any memory copy.</P> <P>&nbsp;</P> <H3><FONT size="3"><STRONG>Unified Memory Allocator</STRONG></FONT></H3> <P>ORTModule uses PyTorch’s allocator for GPU tensor memory management. This is done to avoid having two allocators that can hide free memory from each other leading to inefficient memory utilization and reducing the maximum batch size that can be reached.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_3-1626127015501.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295235iD14FEA81F4A459EE/image-size/medium?v=v2&amp;px=400" role="button" title="kshamamsft_3-1626127015501.png" alt="kshamamsft_3-1626127015501.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 4: Unified memory allocator</EM></P> <P>&nbsp;</P> <H2>Composability with Acceleration Libraries</H2> <P>&nbsp;</P> <H3><STRONG>Integration with DeepSpeed </STRONG></H3> <P>ONNX Runtime for PyTorch supports a seamless integration with <A href="#" target="_self">DeepSpeed</A> to further accelerate distributed training for increasingly large models. The gains compose well as we see significant gains over a variety of Hugging Face models with DeepSpeed and ORT. We see gains ranging from 58% to 86% for Hugging Face models over PyTorch by using DeepSpeed ZeRO Stage 1 with ORT (Figure 5). Currently, ORTModule supports composing with DeepSpeed FP16, ZeRO Stage 1 and 2. Further improvements for ZeRO Stage 2 are in progress.</P> <P>&nbsp;</P> <P class="lia-align-center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_2-1626135650107.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295296iBB6861BF96E34DB7/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_2-1626135650107.png" alt="kshamamsft_2-1626135650107.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 5</EM><EM>: ORT throughput improvements with DeepSpeed ZeRO Stage 1 </EM></P> <P>&nbsp;</P> <H3><STRONG>Mixed precision support</STRONG></H3> <P>ONNX Runtime supports mixed precision training with a variety of solutions like PyTorch’s native <A href="#" target="_self">AMP</A>, <A href="#" target="_self">Nvidia’s Apex O1</A>, as well as with <A href="#" target="_self">DeepSpeed FP16</A>. This allows the user with flexibility to avoid changing their current set up to bring ORT’s acceleration capabilities to their training workloads.</P> <P>&nbsp;</P> <P>Figure 1 in the "Performance results" section above shows speedup with ORT for Pytorch autocast. We see gains ranging from 11% to 37% by using ONNX Runtime for Pytorch.</P> <P>&nbsp;</P> <P>Figure 6 shows speedup with ORT combined with DeepSpeed FP16 against baseline PyTorch. 
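</P> <P>&nbsp;</P> <P>Before looking at the numbers, here is a hedged sketch of how ORTModule is typically composed with DeepSpeed ZeRO Stage 1 and FP16. <EM>MyModel</EM>, <EM>train_loader</EM>, and the configuration values are illustrative placeholders, and older DeepSpeed releases pass the configuration dictionary as <EM>config_params</EM> instead of <EM>config</EM>.</P> <P>&nbsp;</P> <LI-CODE lang="python">import deepspeed
import torch
from torch_ort import ORTModule

model = ORTModule(MyModel())          # wrap the placeholder model with ORT first

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},
    "optimizer": {"type": "Adam", "params": {"lr": 3e-5}},
}

# deepspeed.initialize returns (engine, optimizer, dataloader, lr_scheduler)
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for inputs, labels in train_loader:   # train_loader is a placeholder DataLoader
    inputs = inputs.to(model_engine.local_rank)
    labels = labels.to(model_engine.local_rank)
    outputs = model_engine(inputs)
    loss = torch.nn.functional.cross_entropy(outputs, labels)
    model_engine.backward(loss)       # DeepSpeed handles FP16 loss scaling
    model_engine.step()</LI-CODE> <P>&nbsp;</P> <P>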
We see gains ranging from 22% to 86% on popular Hugging Face models.</P> <P>&nbsp;</P> <P class="lia-align-center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_3-1626135944666.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295297i035CCD02F68886E3/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_3-1626135944666.png" alt="kshamamsft_3-1626135944666.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 6</EM><EM>: ORT throughput improvements with DeepSpeed FP16</EM></P> <P>&nbsp;</P> <P>Figure 7 shows speedup for using ORT with NVIDIA’s Apex O1, giving 8% to 23% gains over PyTorch.</P> <P>.</P> <P class="lia-align-center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="kshamamsft_4-1626136002444.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/295298iE63B2FADAEEE0208/image-size/large?v=v2&amp;px=999" role="button" title="kshamamsft_4-1626136002444.png" alt="kshamamsft_4-1626136002444.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-left"><EM>Figure 7</EM><EM>: ORT throughput improvements with Apex O1 mixed precision</EM></P> <P>&nbsp;</P> <H2>Looking Forward</H2> <P>The ONNX Runtime team is working on more exciting optimizations to make training large workloads even faster. ONNX Runtime for PyTorch plans to add support for custom <A href="#" target="_self">torch.autograd</A> functions which would allow the graph execution to switch back to PyTorch for user-defined autograd functions. This would allow us to support advanced scenarios like <A href="#" target="_self">horizontal parallelism</A> and Mixture of Expert models. Further improvements and support for DeepSpeed ZeRO Stage 2 and 3 have also been planned for future releases. The team will also be adding samples to the <A href="#" target="_self">ONNX Runtime Training Examples</A> in the future as we support more types of models and scenarios. Be sure to check out the links below to learn more and get started with ONNX Runtime for PyTorch!</P> <P>&nbsp;</P> <H2>Getting Started</H2> <UL> <LI><A href="#" target="_blank" rel="noopener">Overview of ONNX Runtime Training</A></LI> <LI><A href="#" target="_blank" rel="noopener">ONNX Runtime Training on AMD GPUs</A></LI> <LI><A href="#" target="_blank" rel="noopener">ONNX Runtime Training Github repo</A></LI> <LI><A href="#" target="_blank" rel="noopener">ONNX Runtime Training Examples</A></LI> <LI><A href="#" target="_self">ZeRO &amp; DeepSpeed<SPAN>: New system optimizations enable training models with over 100 billion parameters</SPAN></A></LI> </UL> Tue, 13 Jul 2021 16:04:06 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerate-pytorch-transformer-model-training-with-onnx-runtime/ba-p/2540471 kshama-msft 2021-07-13T16:04:06Z Text Analytics for health, now generally available, unlocks clinical insights and analytics https://gorovian.000webhostapp.com/?exam=t5/azure-ai/text-analytics-for-health-now-generally-available-unlocks/ba-p/2531955 <P><SPAN>COVID-19 has a created an inflection point that is accelerating the use of AI in healthcare. More data was created in the last two years than in the previous 5,000 years of humanity. Alongside this trend, we see an acceleration of decision support applications that are based on extracting clinical insights and analytics from data. 
AI and Machine Learning play an important role in our ability to understand big data and learn from it.&nbsp;In the journey to transform healthcare, NLP technology for health &amp; life sciences has great potential, as we recently discussed <A href="#" target="_blank" rel="noopener">in this blog</A>.&nbsp;</SPAN></P> <P>&nbsp;</P> <P>Today we are announcing Text Analytics for Health as generally available with Text Analytics in Azure Cognitive Services. The service allows developers to process and extract insights from unstructured biomedical text, including various types of clinical notes, medical publications, electronic health records, clinical trial protocols, and more, expediting the ability to learn from this data and leverage it for secondary use.</P> <P>&nbsp;</P> <P>The service has been in<SPAN>&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-text-analytics-for-health/ba-p/1505152" target="_self">preview since July 2020</A>&nbsp;supports enhanced information extraction capabilities, as follows:</P> <UL> <LI>Identifying medical concepts in text, determining boundaries and classification into domain-specific entities. Concepts include Diagnosis, Symptoms, Examination, Medications, and more. Recent additions to the GA service include expanding the Genomics category to enable extracting mutation types and expression in addition to identifying genes and variants. The current version of the service we are releasing as generally available contains 31 different entity types, and we will be increasing this in the future.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hadasb_4-1625835488249.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294613i068E22AD45B4F8AF/image-size/large?v=v2&amp;px=999" role="button" title="hadasb_4-1625835488249.png" alt="hadasb_4-1625835488249.png" /></span></P> <P>&nbsp;</P> <UL> <LI>Associating medical entities with common ontology concepts from standard clinical coding systems, such as UMLS, SNOMED-CT, ICD9 and 10 etc.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hadasb_3-1625835405375.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294612i791D23F8D81DE996/image-size/large?v=v2&amp;px=999" role="button" title="hadasb_3-1625835405375.png" alt="hadasb_3-1625835405375.png" /></span></P> <UL> <LI>Identifying and extracting semantic relationships and dependencies between different entities to provide deeper understanding of the text, like Dosage of Medication or Variant or Gene. Recent additions made to the service toward its general availability include expanding the types of relationships, and the service now supports 35 different types.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hadasb_5-1625835552623.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294614iFC372296A6BE9E1E/image-size/large?v=v2&amp;px=999" role="button" title="hadasb_5-1625835552623.png" alt="hadasb_5-1625835552623.png" /></span></P> <UL> <LI>Assertion detection, to support better understanding of the context in which the entity appears in the text. 
The Assertions help you detect whether an entity appears in negated form, as possible, likely, unlikely (for example, “patients with possible NHL”); whether the mention is conditional, or mentioned in a hypothetical way (for example, “if patient has rashes (hypothetical), prescribe Solumedrol (conditional)", or whether something is mentioned in the context of someone else (for example, "patient's mother had history of breast cancer" does not mean the patient has breast cancer).</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="hadasb_0-1625835972519.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294617i3725A645139757DA/image-size/large?v=v2&amp;px=999" role="button" title="hadasb_0-1625835972519.png" alt="hadasb_0-1625835972519.png" /></span></P> <P>The service can be used synchronously and asynchronously and is available in most Azure regions, currently in English. The service can be used via a hosted endpoint or by downloading a container, to meet your specific security and data governance requirements. Either way, the service does not store the data it processes and is covered under the Azure compliance .</P> <P>&nbsp;</P> <P>During the last year, the service was available under a gated preview program. With today’s announcement on general availability, we are removing the gating off the service.&nbsp;</P> <P>&nbsp;</P> <P>Get started today:</P> <P>Review<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">Text Analytics for health documentation</A></P> <P>Learn more about&nbsp;<A href="#" target="_blank" rel="noopener noreferrer" data-event="page-clicked-link" data-bi-id="page-clicked-link" data-bi-an="body" data-bi-tn="undefined">Microsoft Cloud for Healthcare</A></P> Fri, 09 Jul 2021 13:27:10 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/text-analytics-for-health-now-generally-available-unlocks/ba-p/2531955 hadasb 2021-07-09T13:27:10Z Improved Environments Experience in Azure Machine Learning for Training and Inference https://gorovian.000webhostapp.com/?exam=t5/azure-ai/improved-environments-experience-in-azure-machine-learning-for/ba-p/2524202 <P><SPAN data-contrast="none">Tracking&nbsp;your project’s software dependencies is an integral part of the machine learning lifecycle.&nbsp;But&nbsp;managing these entities and&nbsp;ensuring reproducibility&nbsp;can be a challenging process leading to delays in the training and deployment of models. 
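</SPAN></P> <P>&nbsp;</P> <P>To make that concrete before walking through the new features, here is a minimal sketch of how such dependencies are typically captured as a reusable environment with the azureml-core Python SDK; the environment name and package list are illustrative only and not part of this announcement.</P> <P>&nbsp;</P> <LI-CODE lang="python"># Minimal sketch (azureml-core SDK): capture pip dependencies as a reusable
# Azure Machine Learning environment. Names and package pins are illustrative.
from azureml.core import Workspace, Environment
from azureml.core.conda_dependencies import CondaDependencies

ws = Workspace.from_config()               # assumes a local config.json for the workspace

env = Environment(name="my-training-env")  # hypothetical environment name
env.python.conda_dependencies = CondaDependencies.create(
    python_version="3.8",
    pip_packages=["scikit-learn==0.24.2", "pandas", "azureml-defaults"])

env.register(workspace=ws)                 # registered environments can be reused across
                                           # data preparation, training, and deployment</LI-CODE> <P>&nbsp;</P> <P><SPAN data-contrast="none">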
Azure Machine Learning Environments capture&nbsp;the Python packages&nbsp;and Docker settings for&nbsp;that are used in machine learning experiments, including in data preparation, training, and deployment to a web service.&nbsp;And we are excited to announce the following feature releases</SPAN><SPAN data-contrast="none">:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <H3 aria-level="2"><SPAN data-contrast="none">Environments UI in Azure Machine Learning studio</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P>&nbsp;</P> <P><SPAN data-contrast="auto">The&nbsp;new&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Environments</SPAN></A><SPAN data-contrast="auto">&nbsp;UI in Azure Machine Learning studio&nbsp;is now in public preview</SPAN><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">Create and edit environments through the&nbsp;Azure Machine Learning&nbsp;studio</SPAN><SPAN data-contrast="none">.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="none">Browse&nbsp;custom and curated environments in&nbsp;your workspace.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="none">View</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">details around properties, dependencies (Docker and Conda layers), and image build logs.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="8" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><SPAN data-contrast="none">Edit tag and description along with the ability to rebuild existing environments.&nbsp;&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P aria-level="2"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-07-07 130422.jpg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294070i6C92CC144DB0EE77/image-size/medium?v=v2&amp;px=400" role="button" title="Screenshot 2021-07-07 130422.jpg" alt="Screenshot 2021-07-07 130422.jpg" /></span></P> <P>&nbsp;</P> <P 
aria-level="2"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2021-07-07 130535.jpg" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294071i723D90097FD9EFE4/image-size/medium?v=v2&amp;px=400" role="button" title="Screenshot 2021-07-07 130535.jpg" alt="Screenshot 2021-07-07 130535.jpg" /></span></P> <P aria-level="2">&nbsp;</P> <H3 aria-level="2">&nbsp;</H3> <H3 aria-level="2"><SPAN data-contrast="none">Curated Environments</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Curated environments</SPAN></A><SPAN data-contrast="none">&nbsp;are provided by Azure Machine Learning and are available in your workspace by default. They are backed by cached Docker images that use the latest version of the Azure Machine Learning SDK&nbsp;and support popular machine learning frameworks and packages, reducing the run preparation cost and allowing for faster deployment time.&nbsp;Environment details as well as their&nbsp;Dockerfiles&nbsp;can be viewed through the Environments UI in the studio.&nbsp;Use these environments to quickly get started with&nbsp;PyTorch, Tensorflow, Sci-kit learn, and more.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <H3 aria-level="2"><SPAN data-contrast="none">Inference Prebuilt Docker Images</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P>&nbsp;</P> <P><SPAN data-contrast="auto">At Microsoft Build 2021 we announced Public Preview of&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Prebuilt docker images and curated environments for Inferencing workloads</SPAN></A><SPAN data-contrast="auto">.&nbsp;</SPAN><SPAN data-contrast="auto">These docker images come with popular machine learning frameworks and Python packages.&nbsp;These are&nbsp;optimized&nbsp;for inferencing only&nbsp;and&nbsp;provided for&nbsp;CPU and GPU based scenarios. They&nbsp;are published&nbsp;to&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Microsoft Container Registry (MCR)</SPAN></A><SPAN data-contrast="auto">. Customers can pull our images&nbsp;directly&nbsp;from MCR or use Azure Machine Learning curated environments. The complete list of&nbsp;inference images is documented here:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">List of Prebuilt images and curated environments</SPAN></A><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">The difference&nbsp;between&nbsp;current base images and inference prebuilt docker images:</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <OL> <LI data-leveltext="%1." 
data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">The prebuilt docker images run as non-root.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559737&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">The inference images are smaller in size than compared to&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">current base images</SPAN></A><SPAN data-contrast="auto">. Hence, improving the model deployment latency.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559737&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">If users want to add extra Python dependencies on top of our prebuilt&nbsp;images, they can do so without triggering an image build&nbsp;during model deployment. Our Python package extensibility solution&nbsp;provides two ways for customers to install these packages:</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559685&quot;:720,&quot;335559737&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259,&quot;335559991&quot;:360}">&nbsp;</SPAN> <UL> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Dynamic Installation</SPAN></A><SPAN data-contrast="auto">: This method is recommended for rapid prototyping.&nbsp;In this solution, we dynamically install extra python packages during container boot time.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559737&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN> <OL class="lia-list-style-type-lower-alpha"> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">&nbsp;Create a&nbsp;</SPAN><SPAN data-contrast="auto">requirements.txt</SPAN><SPAN data-contrast="auto">&nbsp;file alongside your&nbsp;</SPAN><SPAN data-contrast="auto">score.py</SPAN><SPAN data-contrast="auto">&nbsp;script.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559737&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">&nbsp;Add&nbsp;all&nbsp;your required packages to the&nbsp;</SPAN><SPAN data-contrast="auto">requirements.txt</SPAN><SPAN data-contrast="auto">&nbsp;file.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." 
data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">&nbsp;Set the&nbsp;</SPAN><SPAN data-contrast="auto">AZUREML_EXTRA_REQUIREMENTS_TXT</SPAN><SPAN data-contrast="auto">&nbsp;environment variable in your Azure Machine Learning&nbsp;environment&nbsp;to the location of&nbsp;requirements.txt&nbsp;file.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </OL> </LI> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Pre-installed Python packages</SPAN></A><SPAN data-contrast="auto">:&nbsp;This method is recommended for production deployments. In this solution, we mount the directory&nbsp;containing&nbsp;the&nbsp;packages.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN> <OL class="lia-list-style-type-lower-alpha"> <LI data-leveltext="%1." data-font="Calibri" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">&nbsp;Set&nbsp;</SPAN><SPAN data-contrast="auto">AZUREML_EXTRA_PYTHON_LIB_PATH</SPAN><SPAN data-contrast="auto">&nbsp;environment&nbsp;variable,&nbsp;and&nbsp;point&nbsp;it&nbsp;to the correct site packages directory.&nbsp;&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </OL> </LI> </UL> </LI> </OL> <P><A href="#" target="_blank" rel="noopener"><IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/Prebuilt-Docker-Images-for-Inference/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Prebuilt Docker Images for Inference - Microsoft Channel 9 Video"></IFRAME> </A></P> <P><SPAN data-contrast="auto">Dynamic installation:&nbsp;</SPAN><BR /><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prebuilt-docker-images-inference-python-extensibility#dynamic-installation</SPAN></A><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <H3 aria-level="2"><SPAN data-contrast="none">Summary</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559738&quot;:40,&quot;335559739&quot;:0,&quot;335559740&quot;:259}">&nbsp;</SPAN></H3> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Use the environments to track and reproduce your projects' software dependencies as they evolve.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="1" 
data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Environments</SPAN></A><SPAN data-contrast="auto">&nbsp;in Azure Machine Learning.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Manage software environments in Azure Machine Learning studio</SPAN></A><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Curated Environments</SPAN></A><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Prebuilt Docker Images and curated environments for Inference.</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Python Package Extensibility solution</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="6" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Extend a Prebuilt docker image</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> Thu, 08 Jul 2021 17:23:38 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/improved-environments-experience-in-azure-machine-learning-for/ba-p/2524202 saachigopal 2021-07-08T17:23:38Z Speech Service Update - Pronunciation Assessment is Generally Available https://gorovian.000webhostapp.com/?exam=t5/azure-ai/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501 <P><EM>This post was co-authored by Yinhe Wei, Runnan Li, Sheng Zhao, Qinying Liao, Yan Xia, and&nbsp;<SPAN class="TextRun BCX8 SCXW23971255" data-contrast="none"><SPAN class="NormalTextRun BCX8 SCXW23971255">Nalin Mujumdar</SPAN></SPAN></EM></P> <P><EM>&nbsp;</EM></P> <P><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8">An important element of language learning is being able to accurately pronounce words. 
Speech service on Azure supports<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW56966930 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8" data-ccp-charstyle="Hyperlink">Pronunciation Assessment</SPAN></SPAN></A><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8"><SPAN>&nbsp;</SPAN>to<SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW56966930 BCX8">empower language learners and educators more</SPAN></SPAN><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8">. At the<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW56966930 BCX8" href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-azure-cognitive-services-speech-updates/ba-p/2384260" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8" data-ccp-charstyle="Hyperlink">//Build 2021</SPAN></SPAN></A><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8"><SPAN>&nbsp;</SPAN>conference, Pronunciation Assessment is announced generally available in US English, while other<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW56966930 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8" data-ccp-charstyle="Hyperlink">languages</SPAN></SPAN></A><SPAN class="TextRun SCXW56966930 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW56966930 BCX8"><SPAN>&nbsp;</SPAN>are available in preview.</SPAN></SPAN><SPAN class="EOP SCXW56966930 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW123036508 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW123036508 BCX8">The Pronunciation Assessment capability evaluates speech pronunciation and gives speakers feedback on the accuracy and fluency of the speech, allowing users to benefit from various aspects.</SPAN></SPAN><SPAN class="EOP SCXW123036508 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <H2><STRONG>Comprehensive evaluation near human experts</STRONG></H2> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW229509794 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW229509794 BCX8">Pronunciation Assessment, a feature of Speech in Azure Cognitive Services, provides subjective and objective feedback to language learners in<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW229509794 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW229509794 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW229509794 BCX8" data-ccp-charstyle="Hyperlink">computer-assisted language learning</SPAN></SPAN></A><SPAN class="TextRun SCXW229509794 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW229509794 BCX8">.&nbsp; For language learners, practicing pronunciation and getting timely feedback are essential for improving language skills.&nbsp; The assessment is conventionally driven by experienced teachers, which normally takes 
a<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW229509794 BCX8">lot of</SPAN><SPAN class="NormalTextRun SCXW229509794 BCX8"><SPAN>&nbsp;</SPAN>time and big efforts, making high-quality assessment expensive to learners.&nbsp; Pronunciation Assessment, a novel AI driven speech capability, is able to make language assessment more engaging and accessible to learners of all backgrounds.</SPAN></SPAN><SPAN class="EOP SCXW229509794 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW86121198 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW86121198 BCX8">Pronunciation Assessment provides various<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW86121198 BCX8">assessment&nbsp;</SPAN><SPAN class="NormalTextRun SCXW86121198 BCX8">results in different granularities, from individual phonemes to the entire text input. At the phoneme level, Pronunciation Assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech.&nbsp; At the word-level, Pronunciation Assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.&nbsp; At the full-text level, Pronunciation Assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech.&nbsp; With these features, learners can easily know the weakness of their speech, and improve with target goals.</SPAN></SPAN><SPAN class="EOP SCXW86121198 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW255755038 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW255755038 BCX8" data-ccp-parastyle="Normal (Web)">With Pronunciation Assessment, language learners can practice, get instant feedback, and improve their pronunciation. 
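</SPAN></SPAN></P> <P>&nbsp;</P> <P>For developers, the scores described above are surfaced through the Speech SDK. The snippet below is a minimal sketch rather than a full sample: the subscription key, region, audio file name, and reference text are placeholders you would replace with your own values.</P> <P>&nbsp;</P> <LI-CODE lang="python"># Minimal sketch (Speech SDK for Python): request phoneme-level pronunciation
# assessment for a recorded utterance against a reference sentence.
# Key, region, file name, and reference text are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
audio_config = speechsdk.audio.AudioConfig(filename="learner_recording.wav")

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning, how are you today?",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
    enable_miscue=True)   # also report omissions, insertions, and repetitions

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pron_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:    ", assessment.accuracy_score)
print("Fluency:     ", assessment.fluency_score)
print("Completeness:", assessment.completeness_score)
print("Overall:     ", assessment.pronunciation_score)</LI-CODE> <P>&nbsp;</P> <P>For longer reading passages, the same configuration object can also be applied to a continuous recognition session.</P> <P>&nbsp;</P> <P><SPAN class="TextRun" data-contrast="none"><SPAN class="NormalTextRun">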
Online learning solution providers or educators can use the capability to evaluate pronunciation of multiple speakers in real-time.&nbsp;&nbsp;</SPAN></SPAN><SPAN class="EOP SCXW255755038 BCX8" data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=kTyB7YlxuKM" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/kTyB7YlxuKM/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)">Pearson uses Pronunciation Assessment in<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW205505896 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-charstyle="Hyperlink">Longman English Plus</SPAN></SPAN></A><SPAN class="TextRun SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)"><SPAN>&nbsp;</SPAN>to empower both students and teachers to improve the productivity in language learning, with a personalized placement test feature and learning material recommendations for students at different levels. As the world’s leading learning company, Pearson enables tens of millions of learners every year to maximize their<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)">success</SPAN><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)">. 
Key technologies from Microsoft used in Longman English Plus are<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW205505896 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-charstyle="Hyperlink">Pronunciation Assessment,</SPAN></SPAN></A><SPAN class="TextRun SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)"><SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW205505896 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-charstyle="Hyperlink">neural text-to-speech</SPAN></SPAN></A><SPAN class="TextRun SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)"><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW205505896 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-charstyle="Hyperlink">natural language processing</SPAN></SPAN></A><SPAN class="TextRun SCXW205505896 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW205505896 BCX8" data-ccp-parastyle="Normal (Web)">.</SPAN></SPAN><SPAN class="EOP SCXW205505896 BCX8" data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}">&nbsp;Check below video for a demo of the Longman English learning app.&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN class="EOP SCXW205505896 BCX8" data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}"><LI-VIDEO vid="https://www.youtube.com/watch?v=gmbtJyldWCE" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/gmbtJyldWCE/hqdefault.jpg" external="url"></LI-VIDEO></SPAN></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW157497797 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW157497797 BCX8"><A href="#" target="_blank">BYJU'S</A> chooses Speech service on Azure to build the<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW157497797 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW157497797 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW157497797 BCX8" data-ccp-charstyle="Hyperlink">English Language App (ELA)</SPAN></SPAN></A><SPAN class="TextRun SCXW157497797 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW157497797 BCX8"><SPAN>&nbsp;</SPAN>to their target geographies where English is used as the secondary language and is considered an essential skill to acquire. The app blends the best of pedagogy using state-of-the-art speech technology to help children gain command over language with ease in a judgement-free learning environment. With a conversation-first interface, this app enables students to learn, and practice English while working on their language skills in a fun, engaging and effective manner. 
BYJU’S is using the Speech<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW157497797 BCX8">to Text</SPAN><SPAN class="NormalTextRun SCXW157497797 BCX8"><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW157497797 BCX8">Pronunciation Assessment capabilities to ensure that children master English with ease - to practice speaking and receive feedback on pronunciation with phoneme, word and sentence-level pronunciation and fluency scores. BYJU'S ELA assesses pronunciation of students through speaking games, identifies areas of improvement, and provides personalized and adaptive lessons to help students improve in their weak areas.</SPAN></SPAN><SPAN class="EOP SCXW157497797 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=Ehc4v58IeDI" align="center" size="large" width="600" height="338" uploading="false" thumbnail="https://i.ytimg.com/vi/Ehc4v58IeDI/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <H2><STRONG>Mispronunciation&nbsp;</STRONG><STRONG>detection and&nbsp;</STRONG><STRONG>diagnosis</STRONG></H2> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW227743005 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW227743005 BCX8">Mispronunciation Detection and Diagnose (MDD) is the core technique employed in Pronunciation Assessment, scoring word-level pronunciation accuracy, which&nbsp;</SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW227743005 BCX8">provides</SPAN><SPAN class="NormalTextRun SCXW227743005 BCX8"><SPAN>&nbsp;</SPAN>judgement on miscues and contributes to the overall assessment.&nbsp; To provide precise and consistent<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW227743005 BCX8">result</SPAN><SPAN class="NormalTextRun SCXW227743005 BCX8">, Pronunciation Assessment employs the latest powerful neural networks for modelling, exploiting information from lower<SPAN> senone&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW227743005 BCX8">granularity to higher word granularity with the use of hierarchical architecture.</SPAN><SPAN class="NormalTextRun SCXW227743005 BCX8"><SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW227743005 BCX8">This design enables Pronunciation Assessment to fully exploit the detailed pronunciation information from small patterns, making mispronunciation detection more accurate and robust.&nbsp; With 100,000+ hours training data on different accents, regions and ages, Pronunciation Assessment can also handle different scenarios with various users, for example, from kids to adults, from none-native speakers to native speakers, and provide trustable and consistent assessment performance.</SPAN></SPAN><SPAN class="EOP SCXW227743005 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><A class="Hyperlink BCX8 SCXW23079561" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX8 SCXW23079561" data-contrast="none"><SPAN class="NormalTextRun BCX8 SCXW23079561" data-ccp-charstyle="Hyperlink">Teams Reading Progress</SPAN></SPAN></A><SPAN class="TextRun BCX8 SCXW23079561" data-contrast="none"><SPAN class="NormalTextRun BCX8 SCXW23079561">&nbsp;</SPAN></SPAN><SPAN class="TextRun BCX8 SCXW23079561" data-contrast="none"><SPAN class="NormalTextRun BCX8 
SCXW23079561">uses Pronunciation Assessment to help students improve reading fluency, after the pandemic negatively affected students’ reading ability. It can be used inside and outside of the classroom to save teachers' time and improve learning outcomes for students. Learn<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink BCX8 SCXW23079561" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX8 SCXW23079561" data-contrast="none"><SPAN class="NormalTextRun BCX8 SCXW23079561" data-ccp-charstyle="Hyperlink">how to get started</SPAN></SPAN></A><SPAN class="TextRun BCX8 SCXW23079561" data-contrast="none"><SPAN class="NormalTextRun BCX8 SCXW23079561">.</SPAN></SPAN></P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=z9g0-rzT8lE" align="center" size="large" width="600" height="338" uploading="false" thumbnail="http://i.ytimg.com/vi/z9g0-rzT8lE/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px"><EM>“Reading Progress is built on the solid scientific foundation of oral repeated reading and close monitoring by the educator. It allows educators to provide personal attention to each student while at the same time dealing with a whole classroom full of students.” </EM></P> <P class="lia-indent-padding-left-30px"><EM>&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; — <STRONG>Tim Rasinski</STRONG>, Professor of Literacy Education at Kent State University</EM></P> <H2>&nbsp;</H2> <H2><STRONG>Cutting-edge free-style speech assessment</STRONG></H2> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW179088189 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW179088189 BCX8">Pronunciation Assessment also<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW179088189 BCX8">supports</SPAN><SPAN class="NormalTextRun SCXW179088189 BCX8"><SPAN>&nbsp;</SPAN>spontaneous speech scenarios.&nbsp; Spontaneous speech, also known as free-style talk, is the scenario where speakers are giving speech without any prefixed reference, like in presentation and spoken language examination.&nbsp; Empowered with Azure<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW179088189 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW179088189 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW179088189 BCX8" data-ccp-charstyle="Hyperlink">Speech-to-Text</SPAN></SPAN></A><SPAN class="TextRun SCXW179088189 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW179088189 BCX8">, Pronunciation Assessment can automatically transcribe a given speech accurately, and provide assessment result on<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun AdvancedProofingIssueV2 SCXW179088189 BCX8">aforementioned granularities</SPAN><SPAN class="NormalTextRun SCXW179088189 BCX8">.&nbsp;</SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN class="EOP SCXW179088189 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="TextRun SCXW122776657 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW122776657 BCX8">Pronunciation Assessment is used in<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW122776657 BCX8" href="#" target="_blank" 
rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW122776657 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW122776657 BCX8" data-ccp-charstyle="Hyperlink">PowerPoint Presenter Coach</SPAN></SPAN></A><SPAN class="TextRun SCXW122776657 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW122776657 BCX8"><SPAN>&nbsp;</SPAN>to advise presenters on the correct pronunciation of spoken words throughout their rehearsal. When Presenter Coach perceives that you may have mispronounced a word, it will display the word(s) and provide an experience that helps you practice pronouncing the word correctly. You’ll be able to listen to a recorded pronunciation guide of the word as many times as you’d like.</SPAN></SPAN><SPAN class="EOP SCXW122776657 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Melissa_Ma_1-1625117561666.png" style="width: 1005px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/292821i34C60F380653F20C/image-dimensions/1005x565?v=v2" width="1005" height="565" role="button" title="Melissa_Ma_1-1625117561666.png" alt="Melissa_Ma_1-1625117561666.png" /></span></P> <P>&nbsp;</P> <H2><STRONG>Get started</STRONG></H2> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8">To learn more and get started,</SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8"><SPAN>&nbsp;</SPAN>you can first try out Pronunciation Assessment</SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8"><SPAN>&nbsp;</SPAN>to evaluate a user’s fluency and pronunciation<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8">with the<SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8">no-code tool provided in<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW68984651 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8" data-ccp-charstyle="Hyperlink">Speech Studio</SPAN></SPAN></A><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW68984651 BCX8">, which<SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8">allows you to explore the Speech service with intuitive user interface.</SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8"><SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8">You need an Azure account and a Speech service resource before you can use<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW68984651 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8" data-ccp-charstyle="Hyperlink">Speech Studio</SPAN></SPAN></A><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8">. 
If you don't have an account<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW68984651 BCX8">and</SPAN><SPAN class="NormalTextRun SCXW68984651 BCX8"><SPAN>&nbsp;</SPAN>subscription,<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW68984651 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8" data-ccp-charstyle="Hyperlink">try the Speech service for free</SPAN></SPAN></A><SPAN class="TextRun SCXW68984651 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW68984651 BCX8">.</SPAN></SPAN><SPAN class="EOP SCXW68984651 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335551550&quot;:6,&quot;335551620&quot;:6,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Melissa_Ma_2-1625117580408.png" style="width: 1062px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/292822iFF0B915674C68210/image-dimensions/1062x551?v=v2" width="1062" height="551" role="button" title="Melissa_Ma_2-1625117580408.png" alt="Melissa_Ma_2-1625117580408.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW192833011 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW192833011 BCX8" data-ccp-parastyle="Normal (Web)">Here are more resources to help you</SPAN><SPAN class="NormalTextRun SCXW192833011 BCX8" data-ccp-parastyle="Normal (Web)"><SPAN>&nbsp;</SPAN>add speech to your educational applications:</SPAN></SPAN><SPAN class="EOP SCXW192833011 BCX8" data-ccp-props="{&quot;134233117&quot;:true,&quot;134233118&quot;:true,&quot;201341983&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <UL> <LI>Try out our Pronunciation Assessment &nbsp;<A href="#" target="_blank" rel="noopener">demo</A></LI> <LI>Learn more with our&nbsp;<A href="#" target="_blank" rel="noopener">documentation</A>&nbsp;</LI> <LI>Check out easy-to-deploy&nbsp;<A href="#" target="_blank" rel="noopener">samples</A></LI> <LI>Watch the&nbsp;<A href="#" target="_blank" rel="noopener">video introduction</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener">video tutorial</A></LI> <LI>Speech SDK reference <A href="#" target="_blank" rel="noopener">documentation</A></LI> <LI><A href="#" target="_blank" rel="noopener" data-linktype="external">Learn</A> how to create a free Azure account&nbsp;</LI> </UL> <P>Tags:</P> <UL> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/tag/pronunciation%20assessment/tg-p/board-id/AzureAIBlog" target="_blank" rel="noopener">Pronunciation Assessment </A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/tag/Remote%20Learning/tg-p/board-id/AzureAIBlog" target="_blank" rel="noopener">Remote Learning</A></LI> </UL> <P>&nbsp;</P> Tue, 06 Jul 2021 00:24:01 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501 Melissa_Ma 2021-07-06T00:24:01Z Deploy PyTorch models with TorchServe in Azure Machine Learning online endpoints https://gorovian.000webhostapp.com/?exam=t5/azure-ai/deploy-pytorch-models-with-torchserve-in-azure-machine-learning/ba-p/2466459 <P>With our <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/custom-containers-in-azure-machine-learning-managed-online/ba-p/2460558" target="_self">recent announcement</A> of support for custom containers in Azure Machine Learning 
comes support for a wide variety of machine learning frameworks and servers including TensorFlow Serving, R, and ML.NET. In this blog post, we'll show you how to deploy a PyTorch model using <A href="#" target="_self">TorchServe.</A></P> <P>The steps below reference our existing <A href="#" target="_self">TorchServe sample here</A>.</P> <H2>&nbsp;</H2> <H2>Export your model as a .mar file</H2> <P>To use TorchServe, you first need to export your model in the "Model Archive Repository" (.mar) format. Follow the <A href="#" target="_self">PyTorch quickstart</A> to learn how to do this for your PyTorch model.</P> <P>Save your .mar file in a directory called "torchserve."</P> <H2>&nbsp;</H2> <H2>Construct a Dockerfile</H2> <P>In the existing sample, we have a two-line Dockerfile:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">FROM pytorch/torchserve:latest-cpu CMD ["torchserve","--start","--model-store","$MODEL_BASE_PATH/torchserve","--models","densenet161.mar","--ts-config","$MODEL_BASE_PATH/torchserve/config.properties"]</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Modify this Dockerfile to pass the name of your exported model from the previous step for the "--models" argument.</P> <P>&nbsp;</P> <H2>Build an image</H2> <P>Now, build a Docker image from the Dockerfile in the previous step, and store this image in the Azure Container Registry associated with your workspace:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">WORKSPACE=$(az config get --query "defaults[?name == 'workspace'].value" -o tsv) ACR_NAME=$(az ml workspace show -w $WORKSPACE --query container_registry -o tsv | cut -d'/' -f9-) if [[ $ACR_NAME == "" ]] then echo "ACR login failed, exiting" exit 1 fi az acr login -n $ACR_NAME IMAGE_TAG=${ACR_NAME}.azurecr.io/torchserve:8080 az acr build $BASE_PATH/ -f $BASE_PATH/torchserve.dockerfile -t $IMAGE_TAG -r $ACR_NAME</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Test locally</H2> <P>Ensure that you can serve your model by doing a local test. You will need to have Docker installed for this to work. Below, we show you how to run the image, download some sample data, and send a test liveness and scoring request.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Run image locally for testing docker run --rm -d -p 8080:8080 --name torchserve-test \ -e MODEL_BASE_PATH=$MODEL_BASE_PATH \ -v $PWD/$BASE_PATH/torchserve:$MODEL_BASE_PATH/torchserve $IMAGE_TAG # Check Torchserve health echo "Checking Torchserve health..." curl http://localhost:8080/ping # Download test image echo "Downloading test image..." wget https://aka.ms/torchserve-test-image -O kitten_small.jpg # Check scoring locally echo "Uploading testing image, the scoring is..." curl http://localhost:8080/predictions/densenet161 -T kitten_small.jpg docker stop torchserve-test </LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Create endpoint YAML</H2> <P>Create a YAML file that specifies the properties of the managed online endpoint you would like to create. 
In the example below, we specify the location of the model we will use as well as the Azure Virtual Machine size to use when deploying.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="yaml">$schema: https://azuremlsdk2.blob.core.windows.net/latest/managedOnlineEndpoint.schema.json name: torchserve-endpoint type: online auth_mode: aml_token traffic: torchserve: 100 deployments: - name: torchserve model: name: torchserve-densenet161 version: 1 local_path: ./torchserve environment_variables: MODEL_BASE_PATH: /var/azureml-app/azureml-models/torchserve-densenet161/1 environment: name: torchserve version: 1 docker: image: {{acr_name}}.azurecr.io/torchserve:8080 inference_config: liveness_route: port: 8080 path: /ping readiness_route: port: 8080 path: /ping scoring_route: port: 8080 path: /predictions/densenet161 instance_type: Standard_F2s_v2 scale_settings: scale_type: manual instance_count: 1 min_instances: 1 max_instances: 2</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Create endpoint</H2> <P>Now that you have tested locally and you have a YAML file, you can create your endpoint:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash">az ml endpoint create -f $BASE_PATH/$ENDPOINT_NAME.yml -n $ENDPOINT_NAME</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Send a scoring request</H2> <P>Once your endpoint finishes deploying, you can send it unlabeled data for scoring:</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Get accessToken echo "Getting access token..." TOKEN=$(az ml endpoint get-credentials -n $ENDPOINT_NAME --query accessToken -o tsv) # Get scoring url echo "Getting scoring url..." SCORING_URL=$(az ml endpoint show -n $ENDPOINT_NAME --query scoring_uri -o tsv) echo "Scoring url is $SCORING_URL" # Check scoring echo "Uploading testing image, the scoring is..." curl -H "Authorization: {Bearer $TOKEN}" -T kitten_small.jpg $SCORING_URL</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Delete resources</H2> <P>Now that you have successfully created and tested your TorchServe endpoint, you can delete it.</P> <P>&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="bash"># Delete endpoint echo "Deleting endpoint..." az ml endpoint delete -n $ENDPOINT_NAME --yes # Delete model echo "Deleting model..." az ml model delete -n $AML_MODEL_NAME --version 1</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Next steps</H2> <P>Read <A href="#" target="_self">our documentation</A> to learn more and see <A href="#" target="_self">our other samples</A>.</P> <P>&nbsp;</P> Mon, 21 Jun 2021 17:46:22 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/deploy-pytorch-models-with-torchserve-in-azure-machine-learning/ba-p/2466459 gopalv 2021-06-21T17:46:22Z Updates to Azure Arc-enabled Machine Learning https://gorovian.000webhostapp.com/?exam=t5/azure-ai/updates-to-azure-arc-enabled-machine-learning/ba-p/2465745 <P>&nbsp;</P> <P>Azure Machine Learning (AML) team is excited to announce the availability of Azure Arc-enabled Machine Learning (ML) public preview release. All customers of <A href="#" target="_blank" rel="noopener">Azure Arc-enabled Kubernetes</A> now can deploy AzureML extension release and bring AzureML to any infrastructure across multi-cloud, on-premises, and the edge using Kubernetes on their hardware of choice.</P> <P>&nbsp;</P> <P>The design for Azure Arc-enabled ML helps IT Operators leverage native Kubernetes concepts such as namespace, node selector, and resources requests/limits for ML compute utilization and optimization. 
By letting the IT Operator manage ML compute setup, Azure Arc-enabled ML creates a seamless AML experience for data scientists, who do not need to learn or use Kubernetes directly. Data scientists can now focus on models and work with tools such as Azure Machine Learning (AML) Studio, the AML 2.0 CLI, the AML Python SDK, productivity tools like Jupyter notebooks, and ML frameworks like TensorFlow and PyTorch.</P> <P>&nbsp;</P> <H2>IT Operator experience – ML compute setup</H2> <P>Once the Kubernetes cluster is up and running, the IT Operator can follow the 3 simple steps below to prepare the cluster for AML workloads:</P> <UL> <LI>Connect the Kubernetes cluster to Azure via Azure Arc</LI> <LI>Deploy the AzureML extension to the Azure Arc-enabled cluster</LI> <LI>Create a compute target for Data Scientists to use</LI> </UL> <P>The first two steps can be accomplished by simply running the following two CLI commands:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_0-1624269935596.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290244i30760EBE7410A9EB/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_0-1624269935596.png" alt="bozhong68_0-1624269935596.png" /></span></P> <P>&nbsp;</P> <P>Once the AzureML extension installation completes in the cluster, you will see the following AzureML pods running inside it:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_1-1624269935663.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290246i8B021C669B16A0E5/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_1-1624269935663.png" alt="bozhong68_1-1624269935663.png" /></span></P> <P>&nbsp;</P> <P>With your cluster ready to take AML workloads, you can now head over to the AML Studio portal and create a compute target for Data Scientists to use; see the AML Studio compute attach UI below:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_2-1624269935671.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290245i04F2ED59B884AFC7/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_2-1624269935671.png" alt="bozhong68_2-1624269935671.png" /></span></P> <P>&nbsp;</P> <P>Note that by clicking “New-&gt;Kubernetes (preview)”, the Azure Arc-enabled Kubernetes clusters automatically appear in a dropdown list for the IT Operator to attach. During the Studio UI attach operation, the IT Operator can provide an optional JSON configuration file specifying the namespace, node selector, and resource requests/limits to be used for the compute target being created. With these advanced configurations on the compute target, the IT Operator helps Data Scientists target a subset of nodes, such as a GPU pool or CPU pool, for a training job, which improves compute resource utilization and avoids fragmentation. 
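</P> <P>&nbsp;</P> <P>The exact schema of that JSON file is defined in the AML documentation referenced below; purely as an illustration of the kind of settings involved, such a configuration might look like the sketch that follows. The field names and values here are assumptions for illustration, not the authoritative format.</P> <P>&nbsp;</P> <LI-CODE lang="python"># Illustration only: the kind of advanced settings an IT Operator might place in
# the optional JSON configuration file when attaching the compute target.
# The field names below are assumptions; the authoritative schema is in the AML docs.
import json

compute_config = {
    "namespace": "aml-training",               # Kubernetes namespace to run AML workloads in
    "nodeSelector": {"agentpool": "gpupool"},  # steer jobs to the GPU node pool
    "resources": {
        "requests": {"cpu": "2", "memory": "8Gi", "nvidia.com/gpu": "1"},
        "limits":   {"cpu": "4", "memory": "16Gi", "nvidia.com/gpu": "1"},
    },
}

with open("compute-config.json", "w") as f:
    json.dump(compute_config, f, indent=2)</LI-CODE> <P>&nbsp;</P> <P>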
For more information about creating compute targets using these custom properties, please refer to <A href="#" target="_blank" rel="noopener">AML documentation</A>.</P> <P>&nbsp;</P> <P>For upcoming Azure Arc-enabled ML update release, we plan to support compute target creation through CLI command as well, and ML compute setup experience will be simplified to following 3 CLI commands:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_3-1624269935673.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290248i177F9283C042C9F6/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_3-1624269935673.png" alt="bozhong68_3-1624269935673.png" /></span></P> <P>&nbsp;</P> <P>Note when connecting Kubernetes cluster to Azure via Azure Arc, IT Operator can also specify configuration setting to enable outbound proxy server. We are pleased to announce that Azure Arc-enabled Machine Learning fully supports model training on-premises with outbound proxy server connection.</P> <P>&nbsp;</P> <H2>Data Scientist experience – train models</H2> <P>Once the attached Kubernetes compute target is available, Data Scientist can discover the list of compute targets in AML Studio UI compute section. Data Scientist can choose a suitable compute target to submit training job, such as GPU compute target, or CPU compute target with proper resources requests for particular training job workload, such as the # of vCPU cores and memory. Data Scientist can submit job either through AML 2.0 CLI or AML Python SDK, in either case Data Scientist will specify compute target name at job submission time. Azure Arc-enabled ML supports the following built-in AML training features seamlessly:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Train models with AML 2.0 CLI</A> <UL> <LI><A href="#" target="_blank" rel="noopener">Basic Python training job</A></LI> <LI><A href="#" target="_blank" rel="noopener">Distributed training – PyTorch</A></LI> <LI><A href="#" target="_blank" rel="noopener">Distributed training – TensorFlow</A></LI> <LI><A href="#" target="_blank" rel="noopener">Distributed training – MPI</A></LI> <LI><A href="#" target="_blank" rel="noopener">Sweep hyperparameters</A></LI> </UL> </LI> <LI>Train models with AML Python SDK <UL> <LI><A href="#" target="_blank" rel="noopener">Configure and submit training run</A></LI> <LI><A href="#" target="_blank" rel="noopener">Tune hyperparameters</A></LI> <LI><A href="#" target="_blank" rel="noopener">Scikit-learn</A></LI> <LI><A href="#" target="_blank" rel="noopener">TensorFlow</A></LI> <LI><A href="#" target="_blank" rel="noopener">PyTorch</A></LI> </UL> </LI> <LI><A href="#" target="_blank" rel="noopener">Build and use ML pipelines including AML Designer pipeline support</A></LI> </UL> <P>For those Data Science professionals who have used AML Python SDK, existing AML Python SDK examples and notebooks, or your existing projects will work out-of-box with a simple change of compute target name in Python script. If you are not familiar with Python SDK yet, please refer to above links to get started.</P> <P>&nbsp;</P> <P>AML team is extremely excited that Azure Arc-enabled ML supports the latest and greatest <A href="#" target="_blank" rel="noopener">AML 2.0 CLI training job submission</A>, which is in public preview also. 
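</P> <P>&nbsp;</P> <P>For Data Scientists already on the AML Python SDK, the change described above is typically just the compute target name. The following is a minimal sketch, assuming azureml-core is installed, a workspace config.json is present locally, and placeholder names "arc-compute" and "train.py" for the attached compute target and training script.</P> <LI-CODE lang="python">from azureml.core import Workspace, Experiment, ScriptRunConfig

# Connect to the workspace described by the local config.json
ws = Workspace.from_config()

# Point an ordinary training run at the attached Kubernetes compute target.
# "arc-compute" and "train.py" are placeholder names for this sketch.
src = ScriptRunConfig(
    source_directory=".",
    script="train.py",
    compute_target="arc-compute",
)

run = Experiment(workspace=ws, name="arc-training-demo").submit(src)
run.wait_for_completion(show_output=True)</LI-CODE> <P>&nbsp;</P> <P>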
Train models with AML 2.0 CLI is simple and easy with following CLI command:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_4-1624269935673.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290247iB787054C900CC954/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_4-1624269935673.png" alt="bozhong68_4-1624269935673.png" /></span></P> <P>&nbsp;</P> <P>Let’s take a look at job YAML file:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_5-1624269935675.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290249i4F5C3BD735F72CB0/image-size/medium?v=v2&amp;px=400" role="button" title="bozhong68_5-1624269935675.png" alt="bozhong68_5-1624269935675.png" /></span></P> <P>&nbsp;</P> <P>Note job YAML file specifies all training job needed resources and assets including training scripts and compute target. In this case, Data Scientist is using Azure Arc-enabled compute target created by IT Operator earlier. Running the job creation CLI command will submit job to Azure Arc-enabled Kubernetes cluster and opens AML Studio UI portal for Data Scientist to monitor job running status, analyze metrics, and examine logs. Please refer to <A href="#" target="_blank" rel="noopener">Train models with the 2.0 CLI</A> for more information and examples.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="bozhong68_6-1624269935685.png" style="width: 466px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/290250iAF8DD2C6A2A2BA8F/image-dimensions/466x237?v=v2" width="466" height="237" role="button" title="bozhong68_6-1624269935685.png" alt="bozhong68_6-1624269935685.png" /></span></P> <P>&nbsp;</P> <H2>Get started today</H2> <P>In this post, we provided status updates to Azure Arc-enabled Machine Learning and showed how IT Operator can easily setup and prepare Azure Arc-enabled Kubernetes cluster for AML workload, and how Data Scientist can easily train models with AML 2.0 CLI and Kubernetes compute target.</P> <P>&nbsp;</P> <P>To get started with Azure Arc-enabled ML for training public preview, visit <A href="#" target="_blank" rel="noopener">Azure Arc-enabled ML Training Public Preview Github repository</A>, where you can find detailed documentation for IT Operator and Data Scientist, and examples for you to try out easily. In addition, visit the official <A href="#" target="_blank" rel="noopener">AML documentation</A> to find more information.</P> <P>&nbsp;</P> <P>Azure Arc-enabled ML also supports model training with interactive job experience and debugging, which is in private preview. Please sign up <A href="#" target="_blank" rel="noopener">here</A> for interactive job private preview.</P> <P>&nbsp;</P> <P>After Data Scientist trains a model, ML Engineer or Model Deployment Pro can deploy the model with Azure Arc-enabled ML on the same Arc-enabled Kubernetes cluster, which is in private preview too. 
Please sign up <A href="#" target="_blank" rel="noopener">here</A> for inference private preview.</P> <P>&nbsp;</P> <P>Also, check out these additional great AML blog posts!</P> <UL> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-the-new-cli-and-arm-rest-apis-for-azure-machine/ba-p/2393447" target="_blank" rel="noopener">Announcing the new CLI and ARM REST APIs for Azure Machine Learning</A></LI> <LI><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-arc/run-azure-machine-learning-anywhere-on-hybrid-and-in-multi-cloud/ba-p/2170263" target="_blank" rel="noopener">Run Azure Machine Learning anywhere – on hybrid and in multi-cloud with Azure Arc</A></LI> </UL> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Fri, 02 Jul 2021 02:36:16 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/updates-to-azure-arc-enabled-machine-learning/ba-p/2465745 Bozhong_Lin 2021-07-02T02:36:16Z Custom containers in Azure Machine Learning managed online endpoints https://gorovian.000webhostapp.com/?exam=t5/azure-ai/custom-containers-in-azure-machine-learning-managed-online/ba-p/2460558 <P>Today, we are announcing the public preview of the ability to use custom Docker containers in <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-managed-endpoints-in-azure-machine-learning-for/ba-p/2366481" target="_self">Azure Machine Learning online endpoints</A>. In combination with <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-the-new-cli-and-arm-rest-apis-for-azure-machine/ba-p/2393447" target="_blank" rel="noopener">our new 2.0 CLI</A>, this feature enables you to deploy a custom Docker container while getting Azure Machine Learning online endpoints’ built-in monitoring, scaling, and alerting capabilities.</P> <P>&nbsp;</P> <P>Below, we walk you through how to use this feature to deploy <A href="#" target="_blank" rel="noopener">TensorFlow Serving</A> with Azure Machine Learning. 
The full code is available in our <A href="#" target="_blank" rel="noopener">samples repository.</A></P> <P>&nbsp;</P> <H1>Sample deployment with TensorFlow Serving</H1> <P>&nbsp;</P> <P>To deploy a TensorFlow model with TensorFlow Serving, first create a YAML file:</P> <P>&nbsp;</P> <LI-CODE lang="yaml">name: tfserving-endpoint
type: online
auth_mode: aml_token
traffic:
  tfserving: 100
deployments:
  - name: tfserving
    model:
      name: tfserving-mounted
      version: 1
      local_path: ./half_plus_two
    environment_variables:
      MODEL_BASE_PATH: /var/azureml-app/azureml-models/tfserving-mounted/1
      MODEL_NAME: half_plus_two
    environment:
      name: tfserving
      version: 1
      docker:
        image: docker.io/tensorflow/serving:latest
      inference_config:
        liveness_route:
          port: 8501
          path: /v1/models/half_plus_two
        readiness_route:
          port: 8501
          path: /v1/models/half_plus_two
        scoring_route:
          port: 8501
          path: /v1/models/half_plus_two:predict
    instance_type: Standard_F2s_v2
    scale_settings:
      scale_type: manual
      instance_count: 1
      min_instances: 1
      max_instances: 2</LI-CODE> <P>&nbsp;</P> <P>Then create your endpoint:</P> <P>&nbsp;</P> <LI-CODE lang="bash">az ml endpoint create -f endpoint.yml</LI-CODE> <P>&nbsp;</P> <P>And that’s it! You now have a scalable TensorFlow Serving endpoint running on Azure ML-managed compute.</P> <H1>Next steps</H1> <UL> <LI>Read <A href="#" target="_self">our documentation</A></LI> <LI>See the <A href="#" target="_blank" rel="noopener">sample with TorchServe</A></LI> <LI>Learn more about our&nbsp;<A href="#" target="_self">Azure-built inference images</A>.</LI> <LI>Look out for future samples showing ML.NET and R support</LI> </UL> Thu, 17 Jun 2021 20:48:42 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/custom-containers-in-azure-machine-learning-managed-online/ba-p/2460558 gopalv 2021-06-17T20:48:42Z Extract values and line items from invoices with Form Recognizer now generally available https://gorovian.000webhostapp.com/?exam=t5/azure-ai/extract-values-and-line-items-from-invoices-with-form-recognizer/ba-p/2414729 <P><FONT size="6"><STRONG>Extract values and line items from invoices with Form Recognizer</STRONG></FONT></P> <P><EM>Authors: Cha Zhang, Anatoly Ponomarev, Ben Ufuk Tezcan, Neta Haiby</EM></P> <P>&nbsp;</P> <P><FONT size="4"><STRONG>Invoice Automation</STRONG> is a key component for accounts payable processes.
Companies often need to extract key value pairs such as ship to, bill to, total, invoice ID etc., and line items and details such as item name, item quantity, item price and more.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Invoices1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286144iDB77A068E8B99874/image-size/large?v=v2&amp;px=999" role="button" title="Invoices1.png" alt="Invoices1.png" /></span></P> <P><FONT size="4">Processing payments is a tedious and complex process where invoices come in from various sources and in various formats. Unfortunately, companies do not have control of the generation process of these invoices, and there are significant variations among these invoices making automation very challenging. For example, extracting item details on invoices is one of the most complex problems as their structure differs and they can be displayed in various ways, see an example below.</FONT></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="tables.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286145i06BFB92E7097F22F/image-size/large?v=v2&amp;px=999" role="button" title="tables.png" alt="tables.png" /></span></P> <P><FONT size="4">A typical invoice automation solution includes three major steps: document digitization, data extraction and a human in the loop for manual reviews to increase the accuracy. Due to the complexity of invoice layouts, existing solutions either resort to a full manual process, or require extensive efforts to build many templates for processing the large variations of invoices which is not scalable.</FONT></P> <P>&nbsp;</P> <P><FONT size="5"><STRONG>Form Recognizer Invoice API </STRONG></FONT></P> <P><FONT size="4">Recently there has been significant advances towards natural language processing (NLP), where researchers apply Transformer based language models such as BERT, pre-trained on billions of documents. These deep learning models achieve state-of-the-art results on many NLP problems, such as named entity recognition, question answering, text summarization, etc. Nevertheless, it is not straightforward to apply these models to invoice automation, for the following reasons:</FONT></P> <UL> <LI><FONT size="4">A large percentage of invoices are scanned, faxed, or captured with mobile cameras, hence optical character recognition (OCR) is required to extract the raw texts.</FONT></LI> <LI><FONT size="4">Even with OCR, the extracted texts are not continuous. Concatenating the texts and directly applying Transformer-based models leads to poor accuracy, because special structure of the content is lost during the process.</FONT></LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Invoice extraction.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286195iF71171936E4531B5/image-size/large?v=v2&amp;px=999" role="button" title="Invoice extraction.png" alt="Invoice extraction.png" /></span></P> <P>&nbsp;</P> <P><FONT size="4">In the latest release of Form Recognizer, we offer a state-of-the-art, pre-built invoice extraction API with groundbreaking deep learning technology that combines both text and structure/visual information of the input documents. Similar to BERT, the model is pre-trained on a large quantity of documents. 
However, while BERT leverages only the text information, our pretraining also includes structure and visual information. After pre-training, the model is fine-tuned with carefully labeled invoices to achieve high accuracy on all fields and line items. The figure below shows a selected set of benchmark results on an internal invoice data set that contains 460 invoices never seen during training.</FONT></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Invoice benchmark numbers.JPG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286163iB6D27CE7638BF36C/image-size/large?v=v2&amp;px=999" role="button" title="Invoice benchmark numbers.JPG" alt="Invoice benchmark numbers.JPG" /></span><FONT size="5"><STRONG>Easy and simple to use</STRONG></FONT></P> <P><FONT size="4">Get started with the <A title="Form Recognizer Sample Tool" href="#" target="_self">Form Recognizer Sample Tool</A> to extract invoices from documents.</FONT></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Get started.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286146i63D1852236C085F1/image-size/large?v=v2&amp;px=999" role="button" title="Get started.png" alt="Get started.png" /></span></P> <P>&nbsp;</P> <P><FONT size="4">It is as simple as two API calls, with no training, preprocessing, or anything else needed. Just call the <A href="#" target="_blank" rel="noopener">Analyze Invoice&nbsp;operation</A> with your document (image, TIFF, or PDF file) as the input to extract the text, tables, invoice key-value pairs, and line items from your invoices. The Form Recognizer invoice model supports multipage PDF and TIFF files as well as JPG, PNG, and BMP formats.</FONT></P> <P>&nbsp;</P> <P><FONT size="4"><STRONG>Form Recognizer Analyze Invoice API</STRONG></FONT></P> <P><FONT size="4"><STRONG>Step 1</STRONG>:<STRONG> The Analyze Invoice Operation – </STRONG></FONT></P> <P><FONT size="4"><STRONG><A href="#" target="_blank" rel="noopener">https://{endpoint}/formrecognizer/v2.1/prebuilt/invoice/analyze[?includeTextDetails][&amp;locale][&amp;pages]</A></STRONG></FONT></P> <P><FONT size="4">The Analyze Invoice call returns a response header field called&nbsp;Operation-Location. The&nbsp;Operation-Location&nbsp;value is a URL that contains the Result ID to be used in the next step.</FONT></P> <P><FONT size="4">Operation location - </FONT><BR /><FONT size="4"><EM><A href="#" target="_blank" rel="noopener">https://cognitiveservice/formrecognizer/v2.1-preview.3/prebuilt/invoice/analyzeResults/44a436324-fc4b-4387-aa06-090cfbf0064f</A></EM></FONT></P> <P><FONT size="4">Once you have the operation location, call the <A href="#" target="_blank" rel="noopener">Get Analyze Invoice Result</A>&nbsp;operation. This operation takes as input the Result ID that was created by the Analyze Invoice operation.
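</FONT></P> <P>&nbsp;</P> <P><FONT size="4">Putting the two steps together, below is a minimal Python sketch of this flow using the requests library. The endpoint, key, and file name are placeholders to replace with your own values, and the final field access assumes the v2.1 response shape.</FONT></P> <LI-CODE lang="python">import time
import requests

# Placeholder values -- substitute your own resource endpoint, key, and invoice file.
endpoint = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
key = "YOUR-FORM-RECOGNIZER-KEY"

analyze_url = f"{endpoint}/formrecognizer/v2.1/prebuilt/invoice/analyze"
headers = {"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/pdf"}

# Step 1: submit the invoice and read the Operation-Location response header.
with open("invoice.pdf", "rb") as f:
    response = requests.post(analyze_url, headers=headers, data=f)
response.raise_for_status()
result_url = response.headers["Operation-Location"]

# Step 2: poll the result URL until the analysis succeeds or fails.
while True:
    result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": key}).json()
    if result["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)

# Print the names of the extracted invoice fields (v2.1 response shape).
if result["status"] == "succeeded":
    fields = result["analyzeResult"]["documentResults"][0]["fields"]
    print(sorted(fields.keys()))</LI-CODE> <P>&nbsp;</P> <P><FONT size="4">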
It returns a JSON response that contains a&nbsp;status&nbsp;field with the following possible values.</FONT></P> <P>&nbsp;</P> <P><FONT size="4"><STRONG>Step 2</STRONG>: <STRONG>The Get Analyze Layout Result Operation – </STRONG></FONT></P> <P><FONT size="4"><STRONG><A href="#" target="_blank" rel="noopener">https://{endpoint}/formrecognizer/v2.1/prebuilt/invoice/analyzeResults/{resultId}</A></STRONG></FONT></P> <P><FONT size="4">The output of the Get Analyze Invoice Results will provide a JSON output with the extracted invoice fields.</FONT></P> <P><FONT size="4">For example:</FONT></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Json output.png" style="width: 264px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/286196iB6272D6967C26333/image-size/medium?v=v2&amp;px=400" role="button" title="Json output.png" alt="Json output.png" /></span></P> <P>&nbsp;</P> <H2><STRONG>Get Started</STRONG></H2> <UL> <LI><A href="#" target="_blank" rel="noopener">Create a Computer Vision resource</A>&nbsp;in Azure.</LI> <LI><A href="#" target="_blank" rel="noopener">Try it out in the Form Recognizer Sample Tool UX follow the QuickStart</A></LI> <LI>Follow our&nbsp;<A href="#" target="_blank" rel="noopener">SDK and REST API QuickStarts</A>.</LI> <LI>Learn more about <A href="#" target="_blank" rel="noopener">Form Recognizer Invoices</A> and&nbsp;<A href="#" target="_blank" rel="noopener">Form Recognizer</A>.</LI> <LI>Write to us at&nbsp;<A href="https://gorovian.000webhostapp.com/?exam=mailto:formrecog_contact@microsoft.com" target="_blank" rel="noopener">formrecog_contact@microsoft.com</A></LI> </UL> <P>&nbsp;</P> Thu, 03 Jun 2021 23:31:09 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/extract-values-and-line-items-from-invoices-with-form-recognizer/ba-p/2414729 NetaH 2021-06-03T23:31:09Z Accelerate labeling productivity by using AML Data Labeling https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerate-labeling-productivity-by-using-aml-data-labeling/ba-p/2396540 <P>Labeled data is critical to training supervised learning models. Higher volumes and more accurate labeled data contribute to more accurate models but labeling data has traditionally been time-intensive and error-prone.</P> <P><SPAN>With Data Labeling in Azure Machine Learning, you now have a central place to create, manage, and monitor labeling projects. You</SPAN><SPAN>&nbsp;</SPAN>can now manage data labeling projects seamlessly from within the studio web experience to generate and manage tasks reducing the back-and-forth of labelling data offline.&nbsp;With AML Data Labeling, you can load and label data and be ready to train in minutes.</P> <P><SPAN>To increase productivity and decrease costs for a given task, the Assisted Machine Learning labeling&nbsp;feature allows you to leverage automatic machine learning models to accelerate labeling by clustering like objectives and automatically prelabeling data when the underlying model has reached high confidence. 
This feature is available for image classification (multi-class or multi-label) and Object detection tasks, in Enterprise edition workspaces.</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1622223574216.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284690i922533C36C86F8A4/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_0-1622223574216.png" alt="Vijai_Kannan_0-1622223574216.png" /></span></P> <P>&nbsp;</P> <P>Data Labeling in Azure Machine learning now includes below capabilities:</P> <P>&nbsp;</P> <P><STRONG>Image Classification Multi-Class</STRONG></P> <P><SPAN>This project type helps you to categorize an image when you want to apply only a&nbsp;single class&nbsp;from a set of classes to an image.&nbsp;</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_1-1622221112476.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284681i03D80A8FFD98E8AE/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_1-1622221112476.png" alt="Vijai_Kannan_1-1622221112476.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Image Classification Multi-label</STRONG></P> <P><SPAN>This project type allows you to categorize an image when you want to apply one or more&nbsp;labels from a set of classes to an image. For instance, a photo of a dog might be labeled with both&nbsp;dog&nbsp;and&nbsp;land.&nbsp;</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_2-1622221112471.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284682iF7A3AA73E9508549/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_2-1622221112471.png" alt="Vijai_Kannan_2-1622221112471.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Object Identification (Bounding Box)</STRONG></P> <P><SPAN>Use this project type when you want to assign a class and a bounding box to each object within an image.&nbsp;If your project is of type "Object Identification (Bounding Boxes)," you'll specify one or more bounding boxes in the image and apply a tag to each box. Images can have multiple bounding boxes, each with a single tag.</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1622223731607.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284692i901FDBC45A2BCB09/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_0-1622223731607.png" alt="Vijai_Kannan_0-1622223731607.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Instance Segmentation (Polygon)</STRONG></P> <P><SPAN>Use this project type when you want to assign a class and a polygon to each object within an image.&nbsp;If your project is of type "Instance Segmentation (Polygons)," you'll specify one or more polygons in the image and apply a tag to each object . 
Images can have multiple polygons, each with a single tag.</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1622224137986.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284697i3364BAC87A2516E0/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_0-1622224137986.png" alt="Vijai_Kannan_0-1622224137986.png" /></span></P> <P>&nbsp;</P> <P><STRONG>Medical Image Labeling</STRONG></P> <P>AzureML Data Labeling Services supports DICOM images; industry-standard formats for Healthcare AI Medical Image Labeling. The modality supported is X-ray film images. This would help medical staff like doctors and radiologists to annotate x-ray images in training machine learning models. The project types supported are Image Classification, Object detection (bounding boxes) and Instance Segmentation (Polygons).</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Vijai_Kannan_0-1625794079554.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/294528i8D740912B73CF47E/image-size/medium?v=v2&amp;px=400" role="button" title="Vijai_Kannan_0-1625794079554.png" alt="Vijai_Kannan_0-1625794079554.png" /></span></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-30px">&nbsp;<STRONG>! IMPORTANT</STRONG><BR />The capability to label DICOM or similar image types is not intended or made available for use as a medical device, clinical support, diagnostic tool, or other technology intended to be used in the diagnosis, cure, mitigation, treatment, or prevention of disease or other conditions, and no license or right is granted by Microsoft to use this capability for such purposes. This capability is not designed or intended to be implemented or deployed as a substitute for professional medical advice or healthcare opinion, diagnosis, treatment, or the clinical judgment of a healthcare professional, and should not be used as such. The customer is solely responsible for any use of Data Labeling for DICOM or similar image types.</P> <P class="lia-indent-padding-left-30px">&nbsp;</P> <H3 id="toc-hId-1174177655"><STRONG>Assisted machine learning</STRONG></H3> <P>&nbsp;</P> <P>The&nbsp;<STRONG>machine assisted labeling</STRONG>&nbsp;lets you trigger automatic machine learning models to accelerate the labeling task. At the beginning of your labeling project, the images are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your images are of a single class, then approximately 80% of the data used to train the model will be of that class. This training does not include active learning.</P> <P>&nbsp;</P> <P><EM>Enabling ML assisted labeling</EM>&nbsp;consists of two phases:</P> <UL> <LI>Clustering</LI> <LI>Prelabeling</LI> </UL> <P>The exact number of labeled images necessary to start assisted labeling is not a fixed number. This can vary significantly from one labeling project to another. 
ML Assisted Labeling uses a technique called&nbsp;<EM>Transfer Learning</EM>, and the pre-labeling will be triggered when sufficient confidence is achieved which varies based on the dataset.</P> <P>Since the final labels still rely on input from the labeler, this technology is sometimes called&nbsp;<EM>human in the loop</EM>&nbsp;labeling.</P> <H3>&nbsp;</H3> <H3 id="toc-hId-1854236025"><STRONG>Clustering</STRONG></H3> <P>After a certain number of labels are submitted manually, the machine learning model for image classification starts to group together similar images. These similar images are presented to the labelers on the same screen to speed up manual tagging. Clustering is especially useful when the labeler is viewing a grid of 4, 6, or 9 images.</P> <P>The clustering phase does not appear for object detection models.</P> <H3>&nbsp;</H3> <H3 id="toc-hId--1760672901"><STRONG>Prelabeling</STRONG></H3> <P>After enough image labels are submitted, a classification model is used to predict image tags. Or an object detection model is used to predict bounding boxes. The labeler now sees pages that contain predicted labels already present on each image. For object detection, predicted boxes are also shown. Accuracy will vary depending images, labels, the domain, and other factors. With Pre-Labeling, you can review the predictions before committing the labels. &nbsp;</P> <P>Once a machine learning model has been trained on your manually labeled data, the model is evaluated on a test set of manually labeled images to determine its accuracy at a variety of different confidence thresholds. This evaluation process is used to determine a confidence threshold above which the model is accurate enough to show pre-labels. The model is then evaluated against unlabeled data. 
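</P> <P>&nbsp;</P> <P>As a purely illustrative sketch of the idea (not the actual AML implementation), a confidence threshold can be chosen as the lowest value at which the model's predictions on the manually labeled test set reach a target precision:</P> <LI-CODE lang="python">def choose_confidence_threshold(predictions, target_precision=0.95):
    """Pick the lowest confidence threshold whose predictions meet a target precision.

    predictions: list of (confidence, is_correct) pairs from a manually labeled test set.
    Purely illustrative -- this is not the evaluation logic used by the service itself.
    """
    for threshold in sorted({conf for conf, _ in predictions}):
        kept = [correct for conf, correct in predictions if conf >= threshold]
        if kept and sum(kept) / len(kept) >= target_precision:
            return threshold
    return None

# Toy example: pre-label only images scored above the chosen threshold.
test_predictions = [(0.62, False), (0.71, True), (0.83, True), (0.91, True), (0.97, True)]
print(choose_confidence_threshold(test_predictions, target_precision=0.9))  # 0.71</LI-CODE> <P>&nbsp;</P> <P>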
Images with predictions more confident than this threshold are used for pre-labeling.</P> <H2>&nbsp;</H2> <H2 id="toc-hId-716336828"><STRONG>Resources</STRONG></H2> <P>Learn more about the&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">Azure Machine Learning service</A>.</P> <P><A title="Create labeling projects" href="#" target="_self">Create labeling projects</A>&nbsp;</P> <P><A title="Label Images" href="#" target="_self">Label Images</A>&nbsp;</P> <P>Get started with a&nbsp;<A href="#" target="_blank" rel="noopener noreferrer">free trial of the Azure Machine Learning service</A>.</P> <P>&nbsp;</P> Tue, 20 Jul 2021 23:31:47 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerate-labeling-productivity-by-using-aml-data-labeling/ba-p/2396540 Vijai_Kannan 2021-07-20T23:31:47Z Introducing Question Answering in Public Preview https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-question-answering-in-public-preview/ba-p/2374708 <P><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">QnA Maker is an Azure Cognitive Service that allows you to create a conversational layer over your data in minutes.<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">As part of our<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW117981182 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW117981182 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW117981182 BCX8" data-ccp-charstyle="Hyperlink">AI at Scale</SPAN></SPAN></A><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>initiative across Microsoft, we<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">are making<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">the latest breakthroughs in natural language understanding<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">available<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">across our products and within our Azure Services portfolio. 
Powered by our<SPAN>&nbsp;</SPAN></SPAN></SPAN><A class="Hyperlink SCXW117981182 BCX8" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW117981182 BCX8" data-contrast="none"><SPAN class="NormalTextRun SCXW117981182 BCX8" data-ccp-charstyle="Hyperlink">Turing</SPAN></SPAN></A><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">natural language<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">model,<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun CommentStart SCXW117981182 BCX8">we are</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>excited</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"> to</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun CommentStart SCXW117981182 BCX8">introduc</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">e</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">the</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>new</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun Underlined SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">Custom Question Answering</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>feature</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>(public preview)</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">with</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN>Text Analytics</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">.</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;<BR /><BR /></SPAN></SPAN></SPAN></P> <P><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">Alongside Custom question answering, we are also introducing a<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">new<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">P</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">rebuilt<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">Question<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">a</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">nswering<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">capability<SPAN>&nbsp;</SPAN></SPAN><SPAN 
class="NormalTextRun SCXW117981182 BCX8">that</SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun SCXW117981182 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW117981182 BCX8">lets user</SPAN><SPAN class="NormalTextRun SCXW117981182 BCX8">s extract relevant answers to questions from a given passage of text.</SPAN></SPAN><SPAN class="EOP SCXW117981182 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-150px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="TA.PNG" style="width: 644px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282623i88BB840CD97819F8/image-dimensions/644x293?v=v2" width="644" height="293" role="button" title="TA.PNG" alt="TA.PNG" /></span></P> <P><STRONG><SPAN data-contrast="auto">Overview</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Custom question answering supports&nbsp;all capabilities introduced with&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575" target="_blank" rel="noopener"><SPAN data-contrast="none">QnA Maker managed</SPAN><SPAN data-contrast="none">.</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">You can</SPAN><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-contrast="auto">add unstructured files to your knowledge bases</SPAN><SPAN data-contrast="auto">&nbsp;with Custom question&nbsp;answering</SPAN><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="1" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">We are also introducing Prebuilt&nbsp;question answering&nbsp;capability that&nbsp;lets users extract relevant answers to questions from a given passage of text.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P><STRONG><SPAN data-contrast="auto">Enabling Custom question answering feature</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">As <U>Custom question answering</U> is now a feature in Text Analytics, users should first visit the Text Analytics resource&nbsp;create blade on Azure portal.&nbsp;They will have the option to enable&nbsp;</SPAN><SPAN data-contrast="auto">Custom question answering feature&nbsp;</SPAN><SPAN data-contrast="auto">within the&nbsp;resource.&nbsp;Once&nbsp;the&nbsp;Custom question answering&nbsp;feature&nbsp;is selected,&nbsp;the&nbsp;user&nbsp;will update all details related to the feature in the create 
blade.&nbsp;On creation,&nbsp;a&nbsp;new Text Analytics resource with Custom question answering feature&nbsp;is deployed.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="select-qna-feature-create-flow.png" style="width: 715px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282619i458F4079313DAE68/image-dimensions/715x345?v=v2" width="715" height="345" role="button" title="select-qna-feature-create-flow.png" alt="select-qna-feature-create-flow.png" /></span></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="custom_qna_create_button.png" style="width: 565px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282642i9D1164444853B342/image-dimensions/565x700?v=v2" width="565" height="700" role="button" title="custom_qna_create_button.png" alt="custom_qna_create_button.png" /></span><BR /></SPAN></SPAN></SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8"><BR />If user</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">s</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">don’t</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;enable the feature on resource creation,&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">they</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;will have the option to enable the feature later&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">through</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;the&nbsp;</SPAN></SPAN><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8">Features</SPAN></SPAN><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;tab&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">of</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">the Text Analytics resource</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;blade</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">.</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">They</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;will also&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">be able&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">to d</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">isable&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">the&nbsp;</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">Custom question answering</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;feature&nbsp;</SPAN><SPAN 
class="NormalTextRun SCXW265631647 BCX8">via</SPAN><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;the&nbsp;</SPAN></SPAN><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8">Features</SPAN></SPAN><SPAN class="TextRun SCXW265631647 BCX8" data-contrast="auto"><SPAN class="NormalTextRun SCXW265631647 BCX8">&nbsp;tab.</SPAN></SPAN><SPAN class="LineBreakBlob BlobObject DragDrop SCXW265631647 BCX8"><SPAN class="SCXW265631647 BCX8">&nbsp;</SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><SPAN class="LineBreakBlob BlobObject DragDrop SCXW265631647 BCX8"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="update-custom-qna-feature.png" style="width: 722px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282549i10D59F958CFF0B4D/image-dimensions/722x362?v=v2" width="722" height="362" role="button" title="update-custom-qna-feature.png" alt="update-custom-qna-feature.png" /></span></SPAN><SPAN class="EOP SCXW265631647 BCX8" data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></SPAN></P> <P><STRONG><SPAN data-contrast="auto">Support for unstructured documents</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">With&nbsp;</SPAN><SPAN data-contrast="auto">Custom question answering&nbsp;(preview</SPAN><SPAN data-contrast="auto">), we are introducing the<U>&nbsp;</U></SPAN><U>ability to add unstructured files</U><SPAN data-contrast="auto"><U>&nbsp;</U>to the knowledgebases. Now the content author can ingest entire documents in the knowledgebase and when a user query is passed, a response is returned&nbsp;on searching the documents&nbsp;ingested.&nbsp;The content authors don’t need to manage&nbsp;QnA pairs for unstructured documents.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Take for instance the following&nbsp;unstructured&nbsp;document&nbsp;introducing Surface Products:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">SurfaceBlog.&nbsp;</SPAN></A><SPAN data-contrast="auto">The user can ingest this document in the knowledge base and test for user queries from the text.&nbsp;We tested this with a few queries as shown below.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P class="lia-indent-padding-left-60px"><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="headset.PNG" style="width: 478px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282641i0C94FCECEA3E4C77/image-dimensions/478x716?v=v2" width="478" height="716" role="button" title="headset.PNG" alt="headset.PNG" /></span></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="UnstructuredQuery1.png" style="width: 336px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282560i07F087E1F775D848/image-dimensions/336x720?v=v2" width="336" height="720" role="button" title="UnstructuredQuery1.png" alt="UnstructuredQuery1.png" 
/></span></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">Prebuilt Question Answering</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Prebuilt question answering</SPAN></A><SPAN data-contrast="none">&nbsp;provides&nbsp;users&nbsp;the capability to answer&nbsp;questions&nbsp;over a passage of text without having to create knowledge bases and&nbsp;manage additional storage.&nbsp;This functionality is provided as an API and can be used without having to learn the details about QnA Maker.&nbsp;Given a user query and a block of text/passage the API will return an answer and precise answer (if available).&nbsp;</SPAN><SPAN data-contrast="auto">We currently support the ability to pass text from 5 documents.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P class="lia-indent-padding-left-150px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Disha_Agarwal_0-1621588228086.jpg" style="width: 574px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282620i830D476CE57EA49B/image-dimensions/574x142?v=v2" width="574" height="142" role="button" title="Disha_Agarwal_0-1621588228086.jpg" alt="Disha_Agarwal_0-1621588228086.jpg" /></span></P> <P><STRONG><SPAN data-contrast="auto">API Updates</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">We have added a new preview&nbsp;release&nbsp;for v5.0 APIs</SPAN><SPAN data-contrast="auto">:&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">V5.0-preview.2.</SPAN></A><STRONG><SPAN data-contrast="auto">&nbsp;</SPAN></STRONG><SPAN data-contrast="auto">Users should access this version for unstructured documents&nbsp;support&nbsp;and&nbsp;pre</SPAN><SPAN data-contrast="auto">-</SPAN><SPAN data-contrast="auto">built&nbsp;question answering</SPAN><SPAN data-contrast="auto">.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">Pricing</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Custom question answering feature is free in&nbsp;public preview. However,&nbsp;users&nbsp;will be charged for&nbsp;</SPAN><I><SPAN data-contrast="auto">Azure cognitive search</SPAN></I><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-contrast="auto">they&nbsp;add to the feature&nbsp;as per the tier selected.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">Note&nbsp;to&nbsp;existing QnA Maker managed users</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <OL> <LI data-leveltext="%1." 
data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-qna-maker-managed-now-in-public-preview/ba-p/1845575" target="_blank" rel="noopener"><SPAN data-contrast="none">QnA Maker managed&nbsp;</SPAN></A><SPAN data-contrast="auto">has been&nbsp;re-introduced as&nbsp;Custom question answering feature in Text Analytics</SPAN><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Going forward users&nbsp;will not&nbsp;be able to create a new QnA Maker managed resource.&nbsp;All new resources can be created by&nbsp;enabling&nbsp;Custom question answering feature with Text Analytics</SPAN><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">All existing QnA Maker managed (preview) resources continue to work as before.&nbsp;The resource settings, portal, endpoints, keys, SDK, etc.&nbsp;</SPAN><SPAN data-contrast="auto">pertaining to existing QnA Maker managed resources will work as is.&nbsp;</SPAN><SPAN data-contrast="none">There is no action required&nbsp;from the user</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">at this point in time</SPAN><SPAN data-contrast="none">.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">We will continue to support&nbsp;</SPAN><STRONG><SPAN data-contrast="auto">v5.0-preview.1</SPAN></STRONG><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-contrast="auto">for existing QnA Maker managed customers.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI><SPAN data-contrast="auto">If the users choose to migrate to Custom Question answering, they can do so. To migrate from QnA Maker managed to Custom question answering, users can create a Text Analytics resource with Custom question answering feature enabled and&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">migrate the knowledge bases</SPAN></A><SPAN data-contrast="auto">&nbsp;from QnA Maker managed to the Text Analytics resource.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">Custom question answering (preview) continues to be offered&nbsp;in&nbsp;free public preview.</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." 
data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">Custom question answering (preview) is available in the following regions:</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN> <UL> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">South Central US</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">North Europe</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="%1." data-font="游明朝" data-listid="2" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="none">Australia East</SPAN></LI> </UL> </LI> </OL> <P><STRONG><SPAN data-contrast="auto">References</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="4" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Set up a QnA Maker service - QnA Maker - Azure Cognitive Services | Microsoft Docs</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="4" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Configure QnA Maker service - QnA Maker - Azure Cognitive Services | Microsoft Docs</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="4" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Manage knowledge bases - QnA Maker - Azure Cognitive Services | Microsoft Docs</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="4" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Limits and boundaries - QnA Maker - Azure Cognitive Services | Microsoft Docs</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="4" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Prebuilt API</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> Tue, 01 
Jun 2021 06:14:11 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-question-answering-in-public-preview/ba-p/2374708 Disha_Agarwal 2021-06-01T06:14:11Z Cloud Adoption Framework – Innovate with AI best practices https://gorovian.000webhostapp.com/?exam=t5/azure-ai/cloud-adoption-framework-innovate-with-ai-best-practices/ba-p/2389626 <P>Customers are looking to gain insight and value from their data to achieve their business outcomes, and to combine their industry knowledge and domain expertise into a resilient data culture and strong customer capability. Advanced analytics and AI play a pivotal role in accelerating the digital transformation journey. With advances in powerful machine learning algorithms, the democratization of computing power through cloud computing, ever-decreasing storage costs, and access to vast amounts of training data, new and sophisticated AI systems are emerging today.</P> <P>&nbsp;</P> <P>So, how can we adopt AI at scale? By bringing together the experience we have built internally at Microsoft and alongside our customers, and by aligning people, process, and technology in a secure and responsible way, through the lens of enabling AI using the Microsoft Cloud Adoption Framework.</P> <P>&nbsp;</P> <P>So, let’s start by defining what the Cloud Adoption Framework (CAF) is. CAF is a collection of documentation, technical guidance, best practices, and tools. Ultimately, its goal is to enable your organization to achieve the desired business outcomes faster and adopt the cloud in a more holistic way. The objective of enabling AI solutions using CAF is to help you align your thinking, and the language you use, with wider cloud adoption efforts. It helps accelerate the delivery of your AI projects by aligning people, process, and technology in an actionable, efficient, and comprehensive way. In particular, it addresses the following challenges.</P> <P>&nbsp;</P> <TABLE border="1" width="103.82858126799145%"> <TBODY> <TR> <TD width="84.79170609635565%"> <P>At Microsoft, we have been innovating on behalf of our customers. We have many services, features, and functionality available for Data Science and AI. Despite the flexibility and options, we understand that simplicity is important.</P> <P>Enabling AI with CAF does exactly that, in a prescriptive way that makes AI adoption easy for organizations, so they can see a return on their AI investment sooner and achieve accelerated business outcomes.</P> </TD> <TD width="37.790991104662154%"> <P><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Pratim Das.png" style="width: 103px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284177i3843FD445E3F41AB/image-dimensions/103x128?v=v2" width="103" height="128" role="button" title="Pratim Das.png" alt="Pratim Das.png" /></span><FONT size="1 2 3 4 5 6 7"><EM><SPAN style="background-color: transparent; font-family: inherit; font-size: x-small;">Pratim Das (Director, Data &amp; AI, CSU)</SPAN></EM></FONT></P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><STRONG>What challenges does this address?</STRONG></P> <P>&nbsp;</P> <P>First, how do you operationalize machine learning, and what approach do you take to achieve it? This is an industry-wide challenge and requires detailed thinking about people, process, and technology, commonly referred to as the MLOps process. MLOps brings these three dimensions together to provide an end-to-end, enterprise-scale machine learning operating motion in an iterative manner. Cloud adoption for AI provides guidelines on environment/workspace provisioning, roles, responsibilities, process, and technology to facilitate MLOps in an enterprise-ready way.</P> <P>&nbsp;</P> <P>Second, how do you determine the appropriate training and deployment compute instances for your machine learning model? The choice of compute instance has implications for performance efficiency and scalability, as well as cost. Once a model is produced, it is important to choose the correct inference target to meet the business requirements. How do you choose an inference target that handles the scalability, security, and response-time needs? The decision process for targeting the correct compute instance and inference path is addressed by the CAF for AI.</P> <P>&nbsp;</P> <P>Third, how do you achieve machine learning security that not only keeps your data secure in transit and at rest, but also restricts inbound and outbound traffic, both within and outside of the virtual network? On top of that, you need to be able to provision various levels of access control using RBAC and to enforce policies. More importantly, you need to be able to run experiments on PII and confidential data without compromising the privacy and integrity of the data.</P> <P>&nbsp;</P> <P>Finally, how do you ensure you implement responsible and trusted AI? This incorporates the principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. An AI system needs to be able to reasonably justify the decisions it makes and how it came to its conclusions, and the people who design and deploy the AI system need to be accountable for the actions or decisions it takes.</P> <P>&nbsp;</P> <P><STRONG>What assets are available?</STRONG></P> <P>&nbsp;</P> <P>The following assets are available to address the above challenges. The web content is organized into four buckets: AI Ops, AI training/inferencing, AI security, and Responsible &amp; Trusted AI.</P> <P>&nbsp;</P> <P><STRONG>Web content</STRONG></P> <P>&nbsp;</P> <P>The web content can be accessed via the following link (<A href="#" target="_self">click here</A>).</P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2021-05-27 at 00.24.59.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284159iA835137560EA87BE/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2021-05-27 at 00.24.59.png" alt="Screenshot 2021-05-27 at 00.24.59.png" /></span></A></P> <P>&nbsp;</P> <P><STRONG>Videos</STRONG></P> <P>&nbsp;</P> <P>There are supplementary videos that provide an end-to-end overview of CAF – Innovate with AI from a holistic perspective.</P> <P>&nbsp;</P> <P>We recommend starting with the introduction video.</P> <P>&nbsp;</P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - introduction.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284168iC007438FAF61048E/image-size/large?v=v2&amp;px=999" role="button" title="CAF - introduction.png" alt="Introduction" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Introduction</span></span></A></P> <P>The introduction video outlines the various concepts, terminology, and building blocks of CAF.</P> <P>CAF provides six pillars, which are executed in chronological order: starting with strategy, then plan, followed by ready and adopt, and finally govern and manage. Each video provides a detailed overview of the activities that are undertaken and accomplished. Please click on each pillar below to view the videos.</P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - strategy.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/287113i9631F747FD89295F/image-size/large?v=v2&amp;px=999" role="button" title="CAF - strategy.png" alt="Strategy" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Strategy</span></span></A></P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - Plan.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284170i4A6F371B9258F2A0/image-size/large?v=v2&amp;px=999" role="button" title="CAF - Plan.png" alt="Plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Plan</span></span></A></P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - Ready.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284176i9724EBE41E14FE94/image-size/large?v=v2&amp;px=999" role="button" title="CAF - Ready.png" alt="Ready" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Ready</span></span></A></P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - Adopt.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284172i7FB430F784C9EDE0/image-size/large?v=v2&amp;px=999" role="button" title="CAF - Adopt.png" alt="Adopt" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Adopt</span></span></A></P> <P><A href="#" target="_self"><span
class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - govern.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284173iFC9F7B5A0071DA14/image-size/large?v=v2&amp;px=999" role="button" title="CAF - govern.png" alt="Govern" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Govern</span></span></A></P> <P><A href="#" target="_self"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CAF - Manage.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284174i5BBDC8E6E0FC7BD3/image-size/large?v=v2&amp;px=999" role="button" title="CAF - Manage.png" alt="Manage" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Manage</span></span></A></P> <P>&nbsp;</P> <P><STRONG>When and how to use them</STRONG></P> <P>&nbsp;</P> <P>Every organization should consider adopting CAF – Innovate with AI as a first principle for any AI-based workload. This enables organizations to establish recommended operational processes and tools with best-practice guidelines.</P> <P>&nbsp;</P> <P>As a starting point, it is important to get familiar with the various terminologies and concepts underpinning the best practices. It is therefore recommended to go through all the videos first, before working through the web content. The web content should then act as a reference point throughout the lifecycle of a project/workload.</P> <P>&nbsp;</P> <P>Co-authors: Donna Forlin, Pratim Das and William Mendoza</P> <P>&nbsp;</P> <P>&nbsp;</P> Tue, 08 Jun 2021 09:03:12 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/cloud-adoption-framework-innovate-with-ai-best-practices/ba-p/2389626 mufy 2021-06-08T09:03:12Z Announcing the new CLI and ARM REST APIs for Azure Machine Learning https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-the-new-cli-and-arm-rest-apis-for-azure-machine/ba-p/2393447 <P>At Microsoft Build 2021 we launched the public preview of the 2.0 CLI and REST APIs for Azure Machine Learning, enabling users to accelerate the iterative model training and deployment process while tracking the model lifecycle, for a complete MLOps experience.</P> <P>&nbsp;</P> <P><SPAN>Azure Machine Learning (Azure ML) has evolved organically over the past few years. With our 2.0 CLI and ARM REST APIs, we offer a streamlined experience for model training and deployment optimized for ISVs and ML professionals.</SPAN></P> <P>&nbsp;</P> <H2 id="toc-hId-2024213469">What's new?</H2> <P>&nbsp;</P> <H3 id="toc-hId--1580192353"><SPAN>Announcing the 2.0 CLI, backed by durable ARM APIs</SPAN></H3> <P><SPAN>The <EM>ml</EM> extension to the Azure CLI is the improved interface for Azure Machine Learning users.
It enables you to train and deploy models from the command line, with features that accelerate scaling the data science process up and out, all while tracking the model lifecycle.</SPAN></P> <P>&nbsp;</P> <P>Using the CLI enables you to run distributed training jobs on GPU compute, automatically sweep hyperparameters to improve your results, and then monitor jobs in the AML studio user interface to see all details including important metrics, metadata and artifacts like the trained model, checkpoints and logs.</P> <P>&nbsp;</P> <P><SPAN>Additionally, the CLI is optimized to support YAML-based job, endpoint, and asset specifications to enable users to create, manage, and deploy models with proper CI/CD (or GitOps) best practices for an end-to-end MLOps solution.</SPAN></P> <P><SPAN>To get started with the 2.0 machine learning CLI extension for Azure, please check the&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">link here</A><SPAN>&nbsp;</SPAN><SPAN>.</SPAN></P> <P>&nbsp;</P> <H3 id="toc-hId-907320480"><SPAN>Streamlined concepts&nbsp;</SPAN></H3> <P><A href="#" target="_blank" rel="noopener noreferrer">Train models (create jobs) with the 2.0 CLI - Azure Machine Learning | Microsoft Docs</A></P> <P><A href="#" target="_blank" rel="noopener noreferrer">What are endpoints (preview) - Azure Machine Learning | Microsoft Docs</A></P> <P>&nbsp;</P> <P>&nbsp;</P> <H3 id="toc-hId--900133983">Job</H3> <P>A job in Azure ML enables you to prepare and train machine learning models. It enables you to configure:</P> <UL> <LI><SPAN>What to run: your code</SPAN></LI> <LI><SPAN>How to run it: either an optimized prebuilt docker container from AML or one of your choice from your own docker registry</SPAN></LI> <LI><SPAN>Where to run it: either fully managed, scalable compute in Azure or locally on your desktop</SPAN></LI> </UL> <P>&nbsp;</P> <H4 id="toc-hId--209572509"><SPAN>Train a machine learning model by creating a training job</SPAN></H4> <P><SPAN>Here is an example training job which invokes the user’s python script from a local directory and automatically mounts data in Azure Storage.</SPAN></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JordanE_0-1622148430767.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284412iA37A5CD4519410AD/image-size/medium?v=v2&amp;px=400" role="button" title="jordane316_2-1622146564844.png" alt="jordane316_2-1622146564844.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H4 id="toc-hId--2017026972">Easy to optimize the model training process with Sweep Jobs</H4> <P><SPAN>Azure Machine Learning enables you to tune the hyperparameters more efficiently for your machine learning models. You can configure a hyperparameter tuning job, called a sweep job, and submit it via the CLI. 
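 As a minimal sketch, submitting such a job from a Python script by shelling out to the 2.0 CLI might look like the following (this assumes the Azure CLI with the <EM>ml</EM> extension is installed and you are logged in; the YAML file, resource group, and workspace names are placeholders):</SPAN></P> <P>&nbsp;</P> <LI-CODE lang="python"># Minimal sketch: submit a job defined in a 2.0 CLI YAML file from Python.
# Assumptions: the Azure CLI is installed with the "ml" extension ("az extension add -n ml"),
# you are logged in ("az login"), and "job-sweep.yml", "my-resource-group" and "my-workspace"
# are placeholders for your own job specification, resource group, and workspace.
import json
import subprocess

completed = subprocess.run(
    [
        "az", "ml", "job", "create",
        "--file", "job-sweep.yml",             # the YAML job specification
        "--resource-group", "my-resource-group",
        "--workspace-name", "my-workspace",
    ],
    capture_output=True,
    text=True,
    check=True,  # raise if the CLI returns a non-zero exit code
)

job = json.loads(completed.stdout)  # the CLI prints the created job as JSON
print("Submitted job:", job.get("name"))</LI-CODE> <P>&nbsp;</P> <P><SPAN>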
For more information on Azure Machine Learning's hyperparameter tuning offering, see Hyperparameter tuning a model.</SPAN></P> <P>&nbsp;</P> <P><SPAN>You can modify job.yml into job-sweep.yml to sweep over hyperparameters:</SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JordanE_1-1622148430767.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284410iEE45EDD6E95E2EBB/image-size/medium?v=v2&amp;px=400" role="button" title="jordane316_1-1622146538133.png" alt="jordane316_1-1622146538133.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H3 id="toc-hId--2027530076"><SPAN>VS Code support for job authoring and resource creation</SPAN></H3> <P>The <A href="#" target="_blank" rel="noopener noreferrer">Azure Machine Learning extension</A> for VS Code has been revamped for 2.0 CLI compatibility, with added features such as completions and diagnostics for your YAML-based specification files. You can continue to manage resources directly from within the editor and create new ones from starter templates. Within the template files, you can use the extension's language support to fetch completions, previews, and diagnostics for machine learning resources in your workspace.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="JordanE_2-1622148430845.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/284436i61E5ADC90ADC8D30/image-size/medium?v=v2&amp;px=400" role="button" title="JordanE_2-1622148430845.png" alt="JordanE_2-1622148430845.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>Once you have finished authoring the specification file, you can submit it via the CLI directly from within VS Code (tip: right-click in the file itself to view the ‘Azure ML: Create Resource’ command). The extension will streamline invoking the right CLI commands on your behalf. To get started with the extension for creating, authoring, and submitting 2.0 CLI specification files, please <A href="#" target="_blank" rel="noopener noreferrer">follow this documentation</A>.</P> <P>&nbsp;</P> <H3 id="toc-hId-459982757">OSS-based examples for training and deployment</H3> <P>Azure ML is announcing a new set of YAML-based examples for training and deploying models using popular open-source libraries like PyTorch, LightGBM, FastAI, R, and TensorFlow. All examples leverage open-source logging via the MLflow library and do not require Azure-specific code inside of the user training script.</P> <P>&nbsp;</P> <P>Examples are tested and validated using GitHub Actions against the latest Azure ML release.
Official documentation on docs.microsoft.com leverages these tested snippets to ensure a smooth, working experience for users to get started.</P> <P>&nbsp;</P> <P>You can find the new examples here: <A href="#" target="_blank" rel="noopener noreferrer">azureml-examples/cli at main · Azure/azureml-examples (github.com)</A>.</P> <P>&nbsp;</P> <H3 id="toc-hId--1347471706">ARM REST APIs, templates, and examples</H3> <P>With full ARM support for model training jobs and endpoint creation, ISVs can use Azure ML to create and manage machine learning resources as first-class Azure entities.</P> <P>&nbsp;</P> <P>&nbsp;</P> <H4 id="toc-hId-573960367">Examples using ARM REST APIs</H4> <UL> <LI>Train a model using ARM REST APIs: <A href="#" target="_blank" rel="noopener noreferrer">Train models with REST (preview) - Azure Machine Learning | Microsoft Docs</A></LI> <LI>Deploy a model using ARM REST APIs: <A href="#" target="_blank" rel="noopener noreferrer">azureml-examples/how-to-deploy-rest.sh at main · Azure/azureml-examples (github.com)</A></LI> <LI>ARM template to create a workspace and train a model: <A href="#" target="_blank" rel="noopener noreferrer">azureml-examples/readme.md at arm-deploy-example · Azure/azureml-examples (github.com)</A></LI> </UL> <P>&nbsp;</P> <P>Additional documentation on REST APIs is available here:</P> <UL> <LI><A href="#" target="_blank" rel="noopener noreferrer">Azure Machine Learning REST APIs | Microsoft Docs</A></LI> <LI><A href="#" target="_blank" rel="noopener noreferrer">Jobs - Create Or Update - REST API (Azure Machine Learning) | Microsoft Docs</A></LI> <LI><A href="#" target="_blank" rel="noopener noreferrer">Batch Endpoints - Create Or Update - REST API (Azure Machine Learning) | Microsoft Docs</A></LI> <LI><A href="#" target="_blank" rel="noopener noreferrer">Online Endpoints - Create Or Update - REST API (Azure Machine Learning) | Microsoft Docs</A></LI> </UL> <H2 id="toc-hId-995853299">Summary</H2> <P>In summary, the new Azure ML CLI and ARM REST APIs help ML teams focus more on the business problem than on the underlying infrastructure. They provide a simple developer interface to train, deploy, and score models, and help with the operational aspects of the end-to-end MLOps lifecycle.</P> <P>&nbsp;</P> <P>Please try our new examples and templates and share your feedback with us. You can use az feedback directly from the new CLI :)</P> Sat, 29 May 2021 03:39:32 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-the-new-cli-and-arm-rest-apis-for-azure-machine/ba-p/2393447 JordanE 2021-05-29T03:39:32Z Introducing multilingual support for semantic search on Azure Cognitive Search https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110 <P>At Microsoft, we are always looking for ways to empower our customers to achieve more by delivering our most advanced AI-enabled services. In March 2021, we launched the preview release of <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-semantic-search-bringing-more-meaningful-results-to/ba-p/2175636" target="_blank" rel="noopener">semantic search</A> on Azure Cognitive Search, which allows our customers’ search engines to retrieve and rank search results based on the semantic meaning of search keywords rather than just their syntactical interpretation.
We introduced this functionality by leveraging state-of-the-art <A href="#" target="_blank" rel="noopener">language models</A> that <A href="#" target="_blank" rel="noopener">power</A> Microsoft Bing search scenarios across several languages – a result of the recent advancements in developing large, pretrained transformer-based models as part of our Microsoft <A href="#" target="_blank" rel="noopener">AI at Scale</A> initiative.</P> <P>&nbsp;</P> <P>Today, we are excited to announce that we are extending these capabilities to enable semantic search across multiple languages on Azure Cognitive Search.</P> <P>&nbsp;</P> <H2><FONT size="5">Search scenarios</FONT></H2> <P>Semantic search consists of three scenarios – semantic ranking, captions and answers – and customers can easily enable them via the <A href="#" target="_blank" rel="noopener">REST API</A> or Azure Portal to get semantic search results. The following examples illustrate how these scenarios are being delivered across different languages, where we rank search results based on our semantic ranker, followed by extracting and semantically highlighting the answer to the search query.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Semantic search German language.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283753i5B034EB21F04DF1F/image-size/large?v=v2&amp;px=999" role="button" title="Semantic search German language.gif" alt="Figure 1. Semantic search in German language. English translated query is {area code kyllburg&amp;#125;. Sample index is based on the XGLUE benchmark dataset for cross-lingual understanding." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 1. Semantic search in German language. English translated query is {area code kyllburg}. Sample index is based on the XGLUE benchmark dataset for cross-lingual understanding.</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Semantic search French language.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283756i9776FD0EEEF1F703/image-size/large?v=v2&amp;px=999" role="button" title="Semantic search French language.gif" alt="Figure 2. Semantic search in French language. English translated query is {different literary movements&amp;#125;. Sample index is based on the XGLUE benchmark dataset for cross-lingual understanding." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 2. Semantic search in French language. English translated query is {different literary movements}. Sample index is based on the XGLUE benchmark dataset for cross-lingual understanding.</span></span></P> <H2><FONT size="5">Models and evaluations&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</FONT>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</H2> <P>The language models powering semantic search are based on our state-of-the-art Turing multi-language model (<A href="#" target="_blank" rel="noopener">T-ULRv2</A>) that enables search across 100+ languages in a zero-shot fashion. Using global data from Bing, these models have been fine-tuned across various tasks to enable high-quality semantic search features for multiple languages and have been distilled further to optimize for serving real-world online scenarios at a significantly lower cost. 
Below is a list of the various innovations that are powering semantic search today.</P> <P><A href="#" target="_blank" rel="noopener">UniLM (Unified Language Model pre-training)</A></P> <P><A href="#" target="_blank" rel="noopener">Graph attention networks for machine reading comprehension</A></P> <P><A href="#" target="_blank" rel="noopener">Multi-task deep neural networks for natural language understanding</A></P> <P><A href="#" target="_blank" rel="noopener">MiniLM distillation for online serving in real-life applications</A></P> <P>&nbsp;</P> <P>Since their introduction, the models have been serving Bing search traffic across several markets and languages, delivering high-quality semantic search results to Bing users worldwide. Additionally, we have validated the quality of semantic ranking on Azure Cognitive Search using a variety of cross-lingual datasets – these include academic benchmark datasets (e.g. <A href="#" target="_self">XGLUE</A> web page ranking) as well as real-world datasets from services currently powered by Azure Cognitive Search (e.g. Microsoft Docs). Our results showed several points of gain in search relevance metrics (NDCG) over the existing BM25 ranker for various languages such as French, German, Spanish, Italian, Portuguese, Chinese and Japanese. For semantic answers, our evaluations were based on multiple datasets focused on Q&amp;A tasks. Current academic benchmark leaderboards for Q&amp;A scenarios measure the accuracy of answer extraction for a given passage. However, our assessments were required to go a step further and consider more real-world intricacies involving multiple steps (see Figure 3) to extract an answer from a search index: (1) document retrieval from the search index, (2) candidate passage extraction from the given documents, (3) passage ranking across candidate passages, and (4) answer extraction from the most relevant passage. We observed that our model accuracy for the French, Italian, Spanish and German languages is equivalent to that of English.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swaritgrover_1-1621971280196.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283660i441774A566E41DA3/image-size/large?v=v2&amp;px=999" role="button" title="swaritgrover_1-1621971280196.png" alt="Figure 3. Semantic answer extraction in Azure Cognitive Search." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 3. Semantic answer extraction in Azure Cognitive Search.</span></span></P> <H2><FONT size="5">Get started</FONT></H2> <P>The following table summarizes the set of languages and queryLanguage parameter values that we currently support via the REST API to enable semantic search on Azure Cognitive Search. Note that we have also added speller support for the Spanish, French and German languages. For languages marked as “preview”, we encourage you to try the capability for your search index and give us your feedback.
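 As a rough illustration, issuing a semantic query in one of these languages via the REST API might look like the sketch below (the service name, index name, API key, query text, and api-version are placeholders or assumptions; the documentation referenced next has the exact parameters).</P> <P>&nbsp;</P> <LI-CODE lang="python"># Illustrative sketch: a semantic query against Azure Cognitive Search in French.
# The service/index names, API key and api-version below are placeholders/assumptions;
# consult the semantic search documentation for the currently supported preview api-version.
import requests

SERVICE_NAME = "my-search-service"   # placeholder Azure Cognitive Search service
INDEX_NAME = "my-index"              # placeholder index
QUERY_KEY = "YOUR-QUERY-KEY"         # placeholder query key

url = f"https://{SERVICE_NAME}.search.windows.net/indexes/{INDEX_NAME}/docs/search"
body = {
    "search": "différents mouvements littéraires",  # query text in the target language
    "queryType": "semantic",                        # turn on semantic ranking
    "queryLanguage": "fr-fr",                       # value for French (see the table below)
    "answers": "extractive|count-3",                # ask for semantic answers
    "captions": "extractive",                       # ask for semantic captions
}

resp = requests.post(
    url,
    params={"api-version": "2021-04-30-Preview"},   # assumed preview api-version
    headers={"api-key": QUERY_KEY, "Content-Type": "application/json"},
    json=body,
)
resp.raise_for_status()

for doc in resp.json().get("value", []):
    print(doc.get("@search.rerankerScore"), doc.get("@search.captions"))</LI-CODE> <P>&nbsp;</P> <P>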
For detailed instructions on how to configure semantic search for your target language, please refer to our <A href="#" target="_blank" rel="noopener">documentation</A>.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="swaritgrover_2-1621971510974.png" style="width: 841px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283661iF88AF52E01205750/image-size/large?v=v2&amp;px=999" role="button" title="swaritgrover_2-1621971510974.png" alt="Table 1. Supported languages for semantic search on Azure Cognitive Search." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Table 1. Supported languages for semantic search on Azure Cognitive Search.</span></span></P> <H2><FONT size="5">Conclusion</FONT></H2> <P>With additional support for new languages, we are very excited to extend access to our state-of-the-art AI-enabled search capabilities to developers and customers worldwide. Please <A href="#" target="_blank" rel="noopener">sign up</A> for our preview to try out semantic search today!</P> <P>&nbsp;</P> <H2>References</H2> <P><A href="#" target="_blank" rel="noopener">https://aka.ms/semanticgetstarted</A></P> <P>&nbsp;</P> Wed, 26 May 2021 13:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-multilingual-support-for-semantic-search-on-azure/ba-p/2385110 swaritgrover 2021-05-26T13:00:00Z Azure Cognitive Search indexers allow you to ingest data from many new data sources https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-cognitive-search-indexers-allow-you-to-ingest-data-from/ba-p/2381988 <P data-unlink="true">An&nbsp;<A href="#" target="_self">indexer</A>&nbsp;in <A href="#" target="_self">Azure Cognitive Search</A> is a crawler that extracts searchable text and metadata from a data source and populates a search index using field-to-field mappings between source data and your index. This approach is sometimes referred to as a 'pull model' because the service pulls data in without you having to write any code that adds data to an index. Indexers also drive the&nbsp;<A href="#" target="_blank" rel="noopener">AI enrichment&nbsp;capabilities</A> of Cognitive Search, integrating external processing of content en route to an index. Previously, indexers mostly just supported Azure data sources like Azure blobs and Azure SQL.</P> <P>&nbsp;</P> <P><STRONG>Today we’re excited to announce the following updates related to data source support!</STRONG></P> <P>&nbsp;</P> <P>New preview indexers</P> <UL> <LI>Amazon Redshift (Powered by Power Query)</LI> <LI>Cosmos DB Gremlin API</LI> <LI>Elasticsearch (Powered by Power Query)</LI> <LI>MySQL</LI> <LI>PostgreSQL (Powered by Power Query)</LI> <LI>Salesforce Objects (Powered by Power Query)</LI> <LI>Salesforce Reports (Powered by Power Query)</LI> <LI>SharePoint Online</LI> <LI>Smartsheet (Powered by Power Query)</LI> <LI>Snowflake (Powered by Power Query)</LI> </UL> <P>GA indexers</P> <UL> <LI>Azure Data Lake Storage Gen2</LI> </UL> <P>&nbsp;</P> <H1>Power Query Connectors</H1> <P data-unlink="true">Power Query&nbsp;is a data transformation and data preparation engine with the ability to pull data from many different data sources. Power Query connectors are used in products like Power BI and Excel. 
Azure Cognitive Search has <A href="#" target="_self">added support for select Power Query data connectors</A> so that you can pull data from more data sources using the familiar indexer pipeline.</P> <P>&nbsp;</P> <P>You can use the select Power Query connectors just like you would use any other indexer. The Power Query connectors integrated into Azure Cognitive Search support change tracking, skillsets, field mappings, and many of the other features that indexers provide. They also support transformations.</P> <P>&nbsp;</P> <P>These optional transformations can be used to manipulate your data before pulling it into an Azure Cognitive Search index. They can be as simple as removing a column or filtering rows or as advanced as adding your own M script.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Mark_Heffernan_0-1621889796443.png" style="width: 932px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283222i5804E8CCD321A28D/image-dimensions/932x436?v=v2" width="932" height="436" role="button" title="Mark_Heffernan_0-1621889796443.png" alt="Mark_Heffernan_0-1621889796443.png" /></span></P> <P>&nbsp;</P> <P>To learn more about how to pull data from your data source using one of the new Power Query indexers, view the following tutorial:</P> <P>&nbsp;</P> <P><EM><LI-VIDEO vid="https://www.youtube.com/watch?v=uy-l4xFX1EE" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/uy-l4xFX1EE/hqdefault.jpg" external="url"></LI-VIDEO><BR /></EM></P> <P>&nbsp;</P> <H1>SharePoint Online Indexer</H1> <P>The <A href="#" target="_self">SharePoint Online indexer</A> allows you to pull content from one or more SharePoint Online document libraries and index that content into an Azure Cognitive Search index. It supports many different file formats including the Office file formats. It also supports change detection that will by default identify which documents in your document library have been updated, added, or deleted. 
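 As a rough sketch, the data source and indexer for a document library are created with the same REST calls used for other indexers (the connection-string format, container name, and api-version below are assumptions, and the preview requires sign-up), and change detection is on by default:</P> <P>&nbsp;</P> <LI-CODE lang="python"># Rough sketch: create a SharePoint Online data source and an indexer via the REST API.
# The connection-string format, container name, and api-version are assumptions for
# illustration only; the SharePoint Online indexer is in gated preview (sign-up required).
import requests

SERVICE = "my-search-service"        # placeholder search service
ADMIN_KEY = "YOUR-ADMIN-KEY"         # placeholder admin API key
API_VERSION = "2020-06-30-Preview"   # assumed preview api-version

base = f"https://{SERVICE}.search.windows.net"
headers = {"api-key": ADMIN_KEY, "Content-Type": "application/json"}

# 1) Register the document library as a data source (connection string is illustrative).
datasource = {
    "name": "sharepoint-datasource",
    "type": "sharepoint",
    "credentials": {
        "connectionString": "SharePointOnlineEndpoint=https://contoso.sharepoint.com/sites/hr;ApplicationId=YOUR-APP-ID"
    },
    "container": {"name": "defaultSiteLibrary"},   # assumed container name
}
requests.post(f"{base}/datasources", params={"api-version": API_VERSION},
              headers=headers, json=datasource).raise_for_status()

# 2) Create an indexer that pulls from the data source into an existing index.
indexer = {
    "name": "sharepoint-indexer",
    "dataSourceName": "sharepoint-datasource",
    "targetIndexName": "my-index",   # placeholder existing index
}
requests.post(f"{base}/indexers", params={"api-version": API_VERSION},
              headers=headers, json=indexer).raise_for_status()</LI-CODE> <P>&nbsp;</P> <P>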
This means that after the initial ingestion of content from your document library, the indexer will only process content that has been updated, added, or deleted from your document library.</P> <P>&nbsp;</P> <P>To learn more about how to pull data from your SharePoint Online document library, view the following tutorial:</P> <P>&nbsp;</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=QmG65Vgl0JI" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/QmG65Vgl0JI/hqdefault.jpg" external="url"></LI-VIDEO></P> <P><EM>&nbsp;</EM></P> <H1>Getting started</H1> <P>To get started with the new preview indexers, sign up using the below form:</P> <P><A href="#" target="_blank" rel="noopener">https://aka.ms/azure-cognitive-search/indexer-preview</A></P> <P>&nbsp;</P> <P>For more information, see our documentation at:</P> <UL> <LI>Power Query connectors: <A href="#" target="_blank" rel="noopener">https://aka.ms/azs/powerqueryconnectors</A></LI> <LI>SharePoint Online indexer: <A href="#" target="_blank" rel="noopener">https://aka.ms/azs/sharepointindexer</A></LI> <LI>Cosmos DB Gremlin API: <A href="#" target="_blank" rel="noopener">https://aka.ms/azs/cosmosdbgremlinindexer</A></LI> <LI>MySQL indexer: <A href="#" target="_blank" rel="noopener">https://aka.ms/azs/mysqlindexer</A></LI> <LI>Azure Data Lake Storage Gen2 indexer: <A href="#" target="_blank" rel="noopener">https://aka.ms/azs/adlsgen2indexer</A></LI> </UL> <P>&nbsp;</P> Wed, 26 May 2021 23:27:32 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-cognitive-search-indexers-allow-you-to-ingest-data-from/ba-p/2381988 Mark_Heffernan 2021-05-26T23:27:32Z Announcing Enterprise Assistant Bot Template and Conversational UX Guide https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-enterprise-assistant-bot-template-and-conversational/ba-p/2379357 <P>Many organizations are looking to provide conversational experiences across a range of conversational canvases for their employees, across all industries and countries, especially with the increase of remote working during the pandemic. To help customers accelerate building such enterprise assistants, with a high-quality conversational experience, we are announcing the <A href="#" target="_blank" rel="noopener">Enterprise Assistant Bot Template</A> in <A href="#" target="_blank" rel="noopener">Bot Framework Composer</A> as part of <A href="#" target="_blank" rel="noopener">Microsoft Conversational AI’s release at Microsoft Build Conference</A> this week.</P> <P>&nbsp;</P> <P>This template&nbsp;provides a starting point for those interested in creating a virtual assistant for common enterprise scenarios. It demonstrates a common bot building architecture and high-quality pre-built conversational experiences through a root bot connected to multiple skills. This pattern allows for the consolidation of multiple bots across the organization into a centralized solution where a root bot finds the correct bot to handle a query, accelerating user productivity. With <A href="#" target="_blank" rel="noopener">Bot Framework Composer</A>, a powerful visual authoring tool for building bots and virtual assistants, you have the flexibility to personalize the assistant to reflect the values, brand, and tone of your company and extend with code where needed.</P> <P>&nbsp;</P> <P>The template is designed to be ready out-of-the-box with support for common employee channels such as Microsoft Teams and Web Chat. 
It includes features from&nbsp;Core Assistant Bot Template&nbsp;that is newly released in Bot Framework Composer and two pre-built skills (<A href="#" target="_blank" rel="noopener">Enterprise Calendar Bot</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener">Enterprise People Bot</A>), with more to come. Here is an overview of the scenarios included in the Enterprise Assistant Bot Template:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ElaineChang_0-1621794798926.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282998iE42AC822EF8FC3FC/image-size/large?v=v2&amp;px=999" role="button" title="ElaineChang_0-1621794798926.png" alt="ElaineChang_0-1621794798926.png" /></span></P> <P>&nbsp;</P> <P>You can learn more about the Enterprise Assistant Bot Template <A href="#" target="_blank" rel="noopener">here</A> including its targeted conversational experiences, design principles, example interactions, developer experience, data usage and storage. We have also created a <A href="#" target="_blank" rel="noopener">Tutorial</A> to help you get started.</P> <P>&nbsp;</P> <P>All of these investments were informed by the popularity of our Virtual Assistant solution accelerator along with traction around Bot Framework Composer. This led us to infuse all of the Virtual Assistant capabilities into Composer and the Bot Framework SDK enabling these capabilities to be leveraged far more broadly and benefit from new tooling.</P> <P>&nbsp;</P> <P>Here is a preview of an Enterprise Assistant in Microsoft Teams leveraging this template:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Enterprise Assistant Template.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283000iE6EF0B0DA5C9A4F0/image-size/large?v=v2&amp;px=999" role="button" title="Enterprise Assistant Template.gif" alt="Enterprise Assistant Template.gif" /></span></P> <P> </P> <H2><STRONG>Conversational User Experience Guide</STRONG></H2> <P>The Enterprise Assistant Bot Template was designed based on the thoughtful Conversational User Experience (CUX) best practices we have learned based on years of experiences building and deploying CUX worldwide for a variety of bots and virtual assistants. You can check out our newly updated CUX Guide <A href="#" target="_blank" rel="noopener">here</A>. The guide offers insights to help you craft an effective, responsive, inclusive, and delightful experience that tackles a variety of business scenarios.</P> <P>&nbsp;</P> <H2><STRONG>Share your scenario and feedback</STRONG></H2> <P>We would love to hear your scenarios and feedback using this template and CUX guide, including additional pre-built skills and capabilities you would like to see included in the future. Please <A href="#" target="_blank" rel="noopener">submit a feature request</A>&nbsp;with your detailed enterprise scenario for future consideration.</P> <P>&nbsp;</P> <H2><STRONG>Sign up for 1:1 Consultation</STRONG></H2> <P>As part of the <A href="#" target="_blank" rel="noopener">Microsoft Build Conference</A> (May 25-27) this week, we are offering free 1:1 customer consultation sessions with experts in our Microsoft Conversational AI product engineering team. 
After you register for the conference, you can sign up for these consultations <A href="#" target="_blank" rel="noopener">here</A>, choose Azure Conversational AI, and let us know more about your scenarios and challenges. If you have additional questions about the Enterprise Assistant Bot Template or any other Conversational AI questions, we would be happy to help you there.</P> Tue, 25 May 2021 19:58:12 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-enterprise-assistant-bot-template-and-conversational/ba-p/2379357 ElaineChang 2021-05-25T19:58:12Z Build 2021 – Azure Cognitive Services – Speech Updates https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-azure-cognitive-services-speech-updates/ba-p/2384260 <P>Now more than ever, developers are expected to design and build apps that interact naturally with end-users. At Build, we’ve made several improvements to the Speech service that will make it easier to build rich, voice-enabled experiences that address a variety of needs with speech-to-text and text-to-speech.</P> <P>&nbsp;</P> <H2>Improving the Speech Studio experience</H2> <P><A href="#" target="_blank" rel="noopener">Speech Studio</A> is a UI-based experience that allows developers to explore the Speech service with no-code tools and enables them to customize various aspects of the Speech service in a guided experience.</P> <P>Improvements to Speech Studio include:</P> <UL> <LI>Modern UX with the latest unified Azure design template.</LI> <LI>Convenient no-code tools for quickly onboarding to the Speech service. Try out Real-time Speech-to-text to transcribe your audio into text, the Voice Gallery to explore our natural-sounding Text-to-speech voices, and Pronunciation Assessment to evaluate a user’s fluency and pronunciation.</LI> <LI>Responsive layouts, faster page loading, and an improved login experience with the latest Azure AD authentication.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="HeikoRa_0-1621956572430.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283577iEDB5088641A51638/image-size/large?v=v2&amp;px=999" role="button" title="HeikoRa_0-1621956572430.png" alt="HeikoRa_0-1621956572430.png" /></span></P> <P>&nbsp;</P> <H2>Text-to-speech adds more languages, continuous voice quality improvement and more</H2> <P>At the Microsoft Build conference, Microsoft announced the extension of Neural TTS to support 10 more languages and 32 new voices. With this update, Azure neural TTS now provides developers with a rich choice of more than 250 voices available across 70+ languages and variants in 21+ Azure <A href="#" target="_blank" rel="noopener">regions</A>. The 10 newly released languages (locales) are: English (Hong Kong), English (New Zealand), English (Singapore), English (South Africa), Spanish (Argentina), Spanish (Colombia), Spanish (US), Gujarati (India), Marathi (India) and Swahili (Kenya).</P> <P>In addition, 11 new voices are added to the US English portfolio, enabling developers to create even more appealing read-aloud and conversational experiences in different voices.
These new voices are distributed across different age groups, including a kid’s voice, and with different voice timbres, to meet customers’ requirements for voice variety. Together with Aria, Jenny, and Guy, we now offer 14 neural TTS voices in US English.</P> <P>Beyond that, we have also improved the question tones for the following voices: Mia and Ryan in English (United Kingdom), Denise and Henri in French (France), Isabella in Italian (Italy), Conrad in German (Germany), Alvaro in Spanish (Spain), Dalia and Jorge in Spanish (Mexico).</P> <P><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981" target="_blank" rel="noopener">See more details in the TTS blog</A>.</P> <P>&nbsp;</P> <H2>Cross-lingual adaptation enables the same voice to speak in multiple languages</H2> <P>To support the growing need for a single persona to speak multiple languages in scenarios such as localization and translation, a neural voice that speaks multiple languages is now available in public preview. This new Jenny multilingual voice, with US English as the primary/default language, can speak another 13 secondary languages, each fluently: German (Germany), English (Australia), English (Canada), English (United Kingdom), Spanish (Spain), Spanish (Mexico), French (Canada), French (France), Italian (Italy), Japanese (Japan), Korean (Korea), Portuguese (Brazil), Chinese (Mandarin, Simplified).</P> <P>With this new voice, developers can easily enable their applications to speak multiple languages, without changing the persona. Learn how to use the multi-lingual capability of the voice <A href="#" target="_blank" rel="noopener">with SSML</A>.</P> <P>What’s more, we have also brought this powerful feature to <A href="#" target="_blank" rel="noopener">Custom Neural Voice</A>, allowing customers to build a natural-sounding, one-of-a-kind voice that speaks different languages. Custom Neural Voice has enabled a number of global companies such as <A href="#" target="_blank" rel="noopener">BBC</A>, <A href="#" target="_blank" rel="noopener">Swisscom</A>, <A href="#" target="_blank" rel="noopener">AT&amp;T</A> and <A href="#" target="_blank" rel="noopener">Duolingo</A> to build realistic voices that resonate with their brands.</P> <P>This cross-lingual adaptation feature (preview) brings new opportunities to light up more compelling scenarios. For example, developers can enable an English virtual assistant’s voice to speak German fluently so the bot can read movie titles in German, or create a game with the same non-player characters speaking different languages to users from different geographies.</P> <P>&nbsp;</P> <H2>Speech-to-text adds new languages, continuous language detection and more</H2> <P>Speech-to-text capability now supports <A href="#" target="_blank" rel="noopener">95 languages and variants</A>. At Build, we announced 9 new languages (locales): English (Ghana), English (Kenya), English (Tanzania), Filipino (Philippines), French (Switzerland), German (Austria), Indonesian (Indonesia), Malay (Malaysia), Vietnamese (Vietnam).</P> <P>This enables developers to provide solutions to a global audience of users.
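 As a minimal sketch, transcribing audio in one of these locales with the Python Speech SDK looks roughly like the following (the key, region, and file name are placeholders):</P> <P>&nbsp;</P> <LI-CODE lang="python"># Minimal sketch: one-shot speech-to-text in one of the newly added locales.
# Requires the azure-cognitiveservices-speech package; the key, region, and the
# audio file name are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR-SPEECH-KEY", region="westus")
speech_config.speech_recognition_language = "fr-CH"   # French (Switzerland), new at Build

audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")  # placeholder audio file
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
else:
    print("No speech could be recognized:", result.reason)</LI-CODE> <P>&nbsp;</P> <P>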
A great example is Twitter, which is using the Speech service to generate captions for live audio conversations on <A href="#" target="_self">Twitter Spaces</A>, making its platform more accessible to all its users.</P> <P>&nbsp;</P> <P><STRONG>Continuous language detection </STRONG></P> <P>Speech transcription is incredibly accurate and useful for scenarios like call center transcription and live audio captioning. However, in working with some of our customers, we noticed that in some cases, they have multilingual employees and customers who might switch between different languages, often mid-sentence. In our increasingly globalized world, the ability to support multilingual scenarios becomes more essential by the day, whether in conferences, on social media, or in call center transcripts.</P> <P>With continuous language detection, which is in Preview, Speech-to-text can now support recognition in multiple languages. This removes the manual effort for developers of having to tag and split the audio to transcribe it in the correct language – it all becomes automatic. Using this, customers can send audio files or streams, each containing a different language (or possibly more than one), and the service can process them once and return the resulting transcription. To learn how to get started, <A href="#" target="_blank" rel="noopener">visit our documentation page</A>.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="LanguageID.gif" style="width: 853px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283589i6DBE3A398EA0E66F/image-size/large?v=v2&amp;px=999" role="button" title="LanguageID.gif" alt="Depiction of language detection process" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Depiction of language detection process</span></span></P> <P><STRONG>Pronunciation assessment</STRONG></P> <P>An important element of language learning is being able to accurately pronounce words. The Speech service now supports <A href="#" target="_blank" rel="noopener">pronunciation assessment</A> to <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/speech-service-update-pronunciation-assessment-is-generally/ba-p/2505501" target="_blank" rel="noopener">further empower language learners and educators</A>. Pronunciation assessment is generally available in US English. Other <A href="#" target="_blank" rel="noopener">Speech-to-text languages</A> are available in preview.</P> <P>Pronunciation assessment is used in <A href="#" target="_blank" rel="noopener">PowerPoint coach</A> to advise presenters on the correct pronunciation of spoken words throughout their rehearsal. <A href="#" target="_blank" rel="noopener">Teams Reading Progress</A> also uses pronunciation assessment to help students improve reading fluency, after the pandemic negatively affected students’ reading ability. It can be used inside and outside of the classroom to save teachers time and improve learning outcomes for students.</P> <P><A href="#" target="_blank" rel="noopener">BYJU</A> also uses pronunciation assessment to build the <A href="#" target="_blank" rel="noopener">English Language App (ELA)</A> to target geographies where English is used as the secondary language and is considered an essential skill to acquire.
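 A rough sketch of calling pronunciation assessment from the Python Speech SDK is shown below (the key, region, audio file, and reference text are placeholders, and the configuration values are assumptions based on the SDK documentation):</P> <P>&nbsp;</P> <LI-CODE lang="python"># Rough sketch: score a learner's pronunciation against a reference sentence.
# Requires azure-cognitiveservices-speech; the key, region, audio file, and the
# reference text are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="YOUR-SPEECH-KEY", region="westus")
audio_config = speechsdk.audio.AudioConfig(filename="learner.wav")  # learner's recording

pron_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="We look forward to working with you",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pron_config.apply_to(recognizer)          # attach the assessment to the recognizer

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print("Accuracy:", assessment.accuracy_score)
print("Fluency:", assessment.fluency_score)
print("Completeness:", assessment.completeness_score)</LI-CODE> <P>&nbsp;</P> <P>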
BYJU’s app combines comprehensive lessons with state-of-the-art speech technology to help children learn English with a personalized lesson path.</P> <P>Pearson’s <A href="#" target="_blank" rel="noopener">Longman English Plus</A> uses pronunciation assessment to empower both students and teachers to improve productivity in language learning, with a personalized placement test feature and learning material recommendations for different levels of students. As the world’s leading learning company, Pearson enables tens of millions of learners per year to maximize their success. Key technologies from Microsoft used in Longman English Plus are <U>pronunciation assessment</U>, <A href="#" target="_blank" rel="noopener">neural text-to-speech</A> and <A href="#" target="_blank" rel="noopener">natural language processing</A>.</P> <P>&nbsp;</P> <P><STRONG>Custom Keyword</STRONG></P> <P>Custom Keyword, now generally available, allows you to generate keyword recognition models for any word or short phrase you specify, which execute at the edge. The models can be used to add voice activation to your product, enabling your end-users to interact completely hands-free. What’s new is the ability to create Advanced models – models with increased accuracy without you having to provide any training data. Custom Keyword fully handles data generation and training. To learn how to get started, <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/add-voice-activation-to-your-product-with-custom-keyword/ba-p/2332961" target="_blank" rel="noopener">read this walkthrough</A>.</P> <P>&nbsp;</P> <P><STRONG>Speech SDK updates</STRONG></P> <P>Here are the highlights of the May release of the Speech SDK 1.17.0:</P> <UL> <LI>Smaller footprint - we continue to decrease the memory and disk footprint of the Speech SDK and its components and have decreased the footprint by over 30% over the last few releases.</LI> <LI>The SDK now supports the language detection feature mentioned above in C++ and C#. You can easily recognize what language is being spoken either at the beginning of a conversation, or throughout a conversation.</LI> <LI>We are always broadening the scope of platforms on which you can develop speech-enabled applications. We just added the ability to develop mixed reality and gaming applications using Unity on macOS.</LI> <LI>We always strive to meet developers where they are, both on their platforms, and in their preferred programming language. We just added text-to-speech support to our Go programming language API. This is in addition to the speech recognition feature we’ve supported for Go since 2020.</LI> </UL> <P>See the <A href="#" target="_blank" rel="noopener">Speech SDK 1.17.0 release notes</A> for more details. If you’d like us to support additional features for your use case, we are always listening! Please find us on <A href="#" target="_blank" rel="noopener">GitHub</A> and drop a question!
We will get back to you quickly and will do what we can to support you in developing for your use case!</P> <P>Have fun developing awesome speech enabled solutions on the <A href="#" target="_blank" rel="noopener">Azure Speech service</A>!</P> <P>Next Steps:</P> <OL> <LI><A href="#" target="_blank" rel="noopener">Visit the Speech product page</A> to learn about key scenarios &amp; docs to get started</LI> <LI><A href="#" target="_blank" rel="noopener">Try out Speech Studio</A> for a UI-based building experience</LI> <LI><A href="#" target="_blank" rel="noopener">Get started on your 30-day learning journey</A> for AI development</LI> </OL> Tue, 13 Jul 2021 14:57:43 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-azure-cognitive-services-speech-updates/ba-p/2384260 HeikoRa 2021-07-13T14:57:43Z Azure Text-to-Speech updates at //Build 2021 https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981 <P>By: Garfield He, Melinda Ma, Melissa Ma, Bohan Li, Qinying Liao, Sheng Zhao, Yueying Liu</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Text to Speech</A> (TTS),&nbsp;part of Speech in Azure Cognitive Services, enables developers to convert text to lifelike speech for more natural interfaces with a rich choice of prebuilt voices and powerful customization capabilities. At the //Build 2021 conference, we are excited to announce several new features and improvements to TTS that address a variety of needs from customers globally.</P> <P>&nbsp;</P> <H1>Cross-lingual adaptation (preview) enables the same voice to speak multiple languages</H1> <P>&nbsp;</P> <P>Now more than ever, developers are expected to build voice-enabled applications that can reach a global audience. With the same voice persona across languages, organizations can keep their brand image more consistent. 
To support the growing need for a single voice to speak multiple languages, particularly in scenarios such as localization and translation, a multi-lingual neural TTS voice is now available in public preview.</P> <P>&nbsp;</P> <P>This new Jenny Multilingual voice (preview), with US English as the primary/default language, can speak 13 secondary languages, each at the fluent level: German (Germany), English (Australia), English (Canada), English (United Kingdom), Spanish (Spain), Spanish (Mexico), French (Canada), French (France), Italian (Italy), Japanese (Japan), Korean (Korea), Portuguese (Brazil), Chinese (Mandarin, Simplified).</P> <P>&nbsp;</P> <P>Hear how Jenny Multilingual speaks different languages in the samples below:</P> <P>&nbsp;</P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="72px" height="30px"> <P><STRONG>Locale</STRONG></P> </TD> <TD width="113px" height="30px"> <P><STRONG>Language</STRONG></P> </TD> <TD width="160px" height="30px"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="504px" height="30px"> <P><STRONG>Audio</STRONG></P> </TD> </TR> <TR> <TD width="72px"> <P>en-US</P> </TD> <TD width="113px"> <P>English (United States) – <STRONG>the default language</STRONG></P> </TD> <TD width="160px"> <P>We look forward to working with you!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/en-us.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>de-DE</P> </TD> <TD width="113px"> <P>German (Germany)</P> </TD> <TD width="160px"> <P>Wir freuen uns auf die Zusammenarbeit mit Ihnen!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/de-de.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>en-AU</P> </TD> <TD width="113px"> <P>English (Australia)</P> </TD> <TD width="160px"> <P>We look forward to working with you!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/en-au.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>en-CA</P> </TD> <TD width="113px"> <P>English (Canada)</P> </TD> <TD width="160px"> <P>We look forward to working with you!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/en-ca.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>en-GB</P> </TD> <TD width="113px"> <P>English (United Kingdom)</P> </TD> <TD width="160px"> <P>We look forward to working with you!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/en-gb.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>es-ES</P> </TD> <TD width="113px"> <P>Spanish (Spain)</P> </TD> <TD width="160px"> <P>¡Esperamos trabajar con usted!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/es-es.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>es-MX</P> </TD> <TD width="113px"> <P>Spanish (Mexico)</P> </TD> <TD width="160px"> <P>¡Esperamos trabajar con usted!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/es-mx.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> 
<P>fr-CA</P> </TD> <TD width="113px"> <P>French (Canada)</P> </TD> <TD width="160px"> <P>Nous avons hâte de travailler avec vous !</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/fr-ca.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>fr-FR</P> </TD> <TD width="113px"> <P>French (France)</P> </TD> <TD width="160px"> <P>Nous avons hâte de travailler avec vous !</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/fr-fr.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>it-IT</P> </TD> <TD width="113px"> <P>Italian (Italy)</P> </TD> <TD width="160px"> <P>Non vediamo l'ora di lavorare con voi!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/it-it.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>ja-JP</P> </TD> <TD width="113px"> <P>Japanese (Japan)</P> </TD> <TD width="160px"> <P>私たちはあなたと一緒に働くことを楽しみにしています!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/ja-jp.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>ko-KR</P> </TD> <TD width="113px"> <P>Korean (Korea)</P> </TD> <TD width="160px"> <P>우리는 당신과 함께 협정 하는 거를 기대합니다.</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/ko-kr.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>pt-BR</P> </TD> <TD width="113px"> <P>Portuguese (Brazil)</P> </TD> <TD width="160px"> <P>Será um prazer trabalhar com você!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/pt-br.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="72px"> <P>zh-CN</P> </TD> <TD width="113px"> <P>Chinese (Mandarin, Simplified)</P> </TD> <TD width="160px"> <P>我们期待与您合作!</P> </TD> <TD width="504px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1Locale/zh-cn.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>With this new voice, developers can easily enable their applications to speak multiple languages, without changing the persona. Learn how to use the multi-lingual capability of the voice <A href="#" target="_blank" rel="noopener">with SSML</A>.</P> <P>&nbsp;</P> <P>What’s more, this powerful feature is also available in public preview on&nbsp;<A href="#" target="_blank" rel="noopener">Custom Neural Voice</A>, allowing customers to build a natural-sounding one-of-a-kind voice that speaks different languages. Custom Neural Voice has enabled a number of global companies to build realistic voices that resonate with their brands. For example<A href="#" target="_blank" rel="noopener">, BBC</A>, <A href="#" target="_blank" rel="noopener">Swisscom, </A>&nbsp;<A href="#" target="_blank" rel="noopener">AT&amp;T</A> and <A href="#" target="_blank" rel="noopener">Duolingo</A>.</P> <P>&nbsp;</P> <P>This cross-lingual adaptation feature (preview) brings new opportunities to light up more compelling scenarios. 
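<P>&nbsp;</P> <P>As a concrete illustration of the SSML usage linked above, the minimal Python sketch below has the multilingual voice greet listeners in its default English and then switch into German within a single request, using the SSML lang element. The subscription key and region are placeholders, and the voice name shown is the one we are assuming for the Jenny Multilingual (preview) voice, so verify it against the published voice list before relying on it.</P> <LI-CODE lang="python">import azure.cognitiveservices.speech as speechsdk

# Placeholders - substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# The <lang> element tells the multilingual voice which secondary language to speak.
# The voice name below is assumed for the Jenny Multilingual (preview) voice.
ssml = """
<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-JennyMultilingualNeural'>
    We look forward to working with you!
    <lang xml:lang='de-DE'>Wir freuen uns auf die Zusammenarbeit mit Ihnen!</lang>
  </voice>
</speak>
"""

# Synthesize to the default speaker and report how much audio was produced.
result = synthesizer.speak_ssml_async(ssml).get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesized {} bytes of audio.".format(len(result.audio_data)))</LI-CODE> <P>&nbsp;</P>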
For example, developers can enable an English virtual assistant’s voice to speak German fluently so the bot can read movie titles in German; or, create a game with the same non-player characters speaking different languages to users from different geographies. In this demo, <A href="#" target="_blank" rel="noopener">Julia White</A> presents a keynote in a mixed-reality world, using her virtual voice in Japanese, trained from English data.</P> <P>&nbsp;</P> <P>Below samples show a custom neural voice in different languages, trained from the same speaker’s voice data in UK English.</P> <P>&nbsp;</P> <P><STRONG>Human recording as the training data (UK English):&nbsp;</STRONG></P> <TABLE style="width: 850px;" width="736px"> <TBODY> <TR> <TD width="149px"> <P><STRONG>Data language </STRONG></P> </TD> <TD width="293px"> <P><STRONG>Recording sample</STRONG></P> </TD> <TD width="294px"><STRONG>Audio</STRONG></TD> </TR> <TR> <TD width="149px"> <P>English (United Kingdom)</P> </TD> <TD width="293px"> <P>Is it a copy, or do you need it back?</P> </TD> <TD width="294px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/engbrecording.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P><STRONG>TTS output samples in other 13 other languages: &nbsp;</STRONG></P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="59px"> <P><STRONG>TTS locale</STRONG></P> </TD> <TD width="92px"> <P><STRONG>Language </STRONG></P> </TD> <TD width="185px"> <P><STRONG>TTS sample</STRONG></P> </TD> <TD width="202px"> <P><STRONG>Audio</STRONG></P> </TD> </TR> <TR> <TD width="59px"> <P>de-DE</P> </TD> <TD width="92px"> <P>German (Germany)</P> </TD> <TD width="185px"> <P>Zwei der vier Eingänge sind nun geöffnet.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/dede.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-AU</P> </TD> <TD width="92px"> <P>English (Australia)</P> </TD> <TD width="185px"> <P>I've seen this movie already.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/enau.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-CA</P> </TD> <TD width="92px"> <P>English (Canada)</P> </TD> <TD width="185px"> <P>I am looking forward to the exciting things.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/enca.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-GB</P> </TD> <TD width="92px"> <P>English (United Kingdom)</P> </TD> <TD width="185px"> <P>The docking was a fully automated process.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/engb.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-US</P> </TD> <TD width="92px"> <P>English (United States)</P> </TD> <TD width="185px"> <P>I've seen this movie already.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/enus.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>es-ES</P> </TD> <TD width="92px"> <P>Spanish (Spain)</P> </TD> <TD width="185px"> <P>El acoplamiento era un proceso totalmente 
automatizado.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/eses.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>es-MX</P> </TD> <TD width="92px"> <P>Spanish (Mexico)</P> </TD> <TD width="185px"> <P>Estoy deseando que lleguen las cosas emocionantes.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/esmx.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>fr-CA</P> </TD> <TD width="92px"> <P>French (Canada)</P> </TD> <TD width="185px"> <P>Deux des quatre entrées sont maintenant ouvertes.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/frca.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>fr-FR</P> </TD> <TD width="92px"> <P>French (France)</P> </TD> <TD width="185px"> <P>Deux des quatre entrées sont maintenant ouvertes.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/frfr.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>it-IT</P> </TD> <TD width="92px"> <P>Italian (Italy)</P> </TD> <TD width="185px"> <P>Due dei quattro ingressi sono ora aperti.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/itit.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>ja-JP</P> </TD> <TD width="92px"> <P>Japanese (Japan)</P> </TD> <TD width="185px"> <P>私はすでにこの映画を見てきました。</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/jajp.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>ko-KR</P> </TD> <TD width="92px"> <P>Korean (Korea)</P> </TD> <TD width="185px"> <P>네개의 입구 중 두개가 이제 열려 있습니다.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/kokr.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>pt-BR</P> </TD> <TD width="92px"> <P>Portuguese (Brazil)</P> </TD> <TD width="185px"> <P>Eu já vi esse filme.</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/ptbr.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>zh-CN</P> </TD> <TD width="92px"> <P>Chinese (Mandarin, Simplified)</P> </TD> <TD width="185px"> <P>我已经看过这部电影了。</P> </TD> <TD width="202px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/bohli/May2021Tier1LocaleAchie/zhcn.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>The cross-lingual adaptation feature (preview) on Custom Neural Voice, is available in the latest <A href="#" target="_blank" rel="noopener">Speech Studio</A><SPAN>, </SPAN>&nbsp;a UI-based experience that allows developers to explore the Speech service with no-code tools and enables them to customize various aspects of the Speech service in a guided experience.</P> <P>&nbsp;</P> <P>As part of Microsoft’s commitment to responsible AI,&nbsp;Custom Neural Voice is available with <A href="#" target="_blank" rel="noopener">limited access</A>. 
Check more details on how to apply and use Custom Neural Voice <A href="#" target="_blank" rel="noopener">in this video</A>.</P> <P>&nbsp;</P> <H1>Neural text-to-speech supports 10 more languages</H1> <P>&nbsp;</P> <P>We are glad to announce that neural TTS is extended to support 10 more languages and 32 new voices. With this update, Azure neural TTS now provides developers with more than 250 voices available across 70+ languages and variances. <A href="#" target="_blank" rel="noopener">Check the full languages and voices</A>.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GarfieldHe_2-1621932439143.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283436i741059736B22F3B2/image-size/large?v=v2&amp;px=999" role="button" title="GarfieldHe_2-1621932439143.png" alt="GarfieldHe_2-1621932439143.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GarfieldHe_3-1621932575567.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283439i08A139E36904C404/image-size/large?v=v2&amp;px=999" role="button" title="GarfieldHe_3-1621932575567.png" alt="GarfieldHe_3-1621932575567.png" /></span></P> <P>&nbsp;</P> <P>The 10 languages newly released are: `en-HK` English (Hongkong), `en-NZ` English (New Zealand), `en-SG` English (Singapore), `en-ZA` English (South Africa), `es-AR` Spanish (Argentina), `es-CO` Spanish (Columbia), `es-US` Spanish (US), `gu-IN` Gujarati (India), `mr-IN` Marathi (India) and `sw-KE` Swahili (Kenya).</P> <P>&nbsp;</P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="38"> <P><STRONG>Locale</STRONG></P> </TD> <TD width="55"> <P><STRONG>Language </STRONG></P> </TD> <TD width="39"> <P><STRONG>Gender</STRONG></P> </TD> <TD width="45"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="122"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="284"> <P><STRONG>Audio</STRONG></P> </TD> </TR> <TR> <TD width="38"> <P>en-HK</P> </TD> <TD width="55"> <P>English (Hongkong)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Sam</P> </TD> <TD width="122"> <P>The time is 12:05 PM.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-HK-Sam-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-HK</P> </TD> <TD width="55"> <P>English (Hongkong)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Yan</P> </TD> <TD width="122"> <P>We discussed buying a motorhome last night.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-HK-Yan-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-NZ</P> </TD> <TD width="55"> <P>English (New Zealand)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Mitchell</P> </TD> <TD width="122"> <P>The development site is situated within a new 12 kilometre touristic zone.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-NZ-Mitchell-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-NZ</P> </TD> <TD width="55"> <P>English (New Zealand)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Molly</P> </TD> <TD width="122"> <P>You need to use about 10 grammes of 
sugar.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-NZ-Molly-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-SG</P> </TD> <TD width="55"> <P>English (Singapore)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Wayne</P> </TD> <TD width="122"> <P>How long does it take to reheat pot roast?</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-SG-Wayne-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-SG</P> </TD> <TD width="55"> <P>English (Singapore)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Luna</P> </TD> <TD width="122"> <P>For the two friends, this was their first mission together.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-SG-Luna-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-ZA</P> </TD> <TD width="55"> <P>English (South Africa)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Leah</P> </TD> <TD width="122"> <P>We have to be there at 6 in the morning.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-ZA-Leah-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>en-ZA</P> </TD> <TD width="55"> <P>English (South Africa)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Luke</P> </TD> <TD width="122"> <P>The role of business has never been as important as it is today.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/en-ZA-Luke-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-AR</P> </TD> <TD width="55"> <P>Spanish (Argentina)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Tomas</P> </TD> <TD width="122"> <P>Estoy leyendo un blog de viajes.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-AR-Tomas-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-AR</P> </TD> <TD width="55"> <P>Spanish (Argentina)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Elena</P> </TD> <TD width="122"> <P>El fin de amar es sentirse más vivo.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-AR-Elena-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-CO</P> </TD> <TD width="55"> <P>Spanish (Columbia)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Gonzalo</P> </TD> <TD width="122"> <P>Hoy me voy de rumba con mis compañeros.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-CO-Gonzalo-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-CO</P> </TD> <TD width="55"> <P>Spanish (Columbia)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Salome</P> </TD> <TD width="122"> <P>¿Usted conoce a la profesora del salón 101?</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> 
<SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-CO-Salome-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-US</P> </TD> <TD width="55"> <P>Spanish (US)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Alonso</P> </TD> <TD width="122"> <P>Vamos por unos tequilas mi hermano.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-US-Alonso-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>es-US</P> </TD> <TD width="55"> <P>Spanish (US)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Paloma</P> </TD> <TD width="122"> <P>Quiero comer frijoles con bistec.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/es-US-Paloma-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>gu-IN</P> </TD> <TD width="55"> <P>Gujarati (India)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Dhwani</P> </TD> <TD width="122"> <P>ગુજરાતના સૌરાષ્ટ્ર વિસ્તારમાં ગીરનું જંગલ આવેલું છે, જે એશીયાઇ સિંહો માટે પ્રખ્યાત છે.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/gu-IN-Dhwani-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>gu-IN</P> </TD> <TD width="55"> <P>Gujarati (India)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Niranjan</P> </TD> <TD width="122"> <P>ગુજરાત ભારતના પશ્ચિમ તટે આવેલું રાજ્ય છે અને તે પશ્ચિમે અરબી સમુદ્રથી ઘેરાયેલું છે.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/gu-IN-Niranjan-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>mr-IN</P> </TD> <TD width="55"> <P>Marathi (India)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Aarohi</P> </TD> <TD width="122"> <P>गडचिरोली जिल्हा महाराष्ट्र राज्याच्या उत्तर-पूर्व दिशेला वसलेला असून, तेलंगणा आणि छत्तीसगड राज्याच्या सीमेला लागून आहे.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/mr-IN-Aarohi-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>mr-IN</P> </TD> <TD width="55"> <P>Marathi (India)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Manohar</P> </TD> <TD width="122"> <P>गोदावरी नदीची गणना भारतातील प्रमुख नद्यांमध्ये केली जाते. 
या नदीला दक्षिण गंगा असे ही म्हंटले जाते.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/mr-IN-Manohar-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>sw-KE</P> </TD> <TD width="55"> <P>Swahili (Kenya)</P> </TD> <TD width="39"> <P>Female</P> </TD> <TD width="45"> <P>Zuri</P> </TD> <TD width="122"> <P>Usiwe na wasiwasi; nitakuwa pamoja nawe siku zote.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/sw-KE-Zuri-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="38"> <P>sw-KE</P> </TD> <TD width="55"> <P>Swahili (Kenya)</P> </TD> <TD width="39"> <P>Male</P> </TD> <TD width="45"> <P>Rafiki</P> </TD> <TD width="122"> <P>Starehe ni mojawapo ya shule zinazofanya vyema zaidi barani Afrika.</P> </TD> <TD width="284"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20release/sw-KE-Rafiki-General-Audio.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>You can also check out these voices in <A href="#" target="_blank" rel="noopener">our demo on Azure</A>&nbsp;or through the&nbsp;<A href="#" target="_blank" rel="noopener">Audio Content Creation tool</A> with your own text.</P> <P>&nbsp;</P> <H1>More neural voices are available in English</H1> <P>&nbsp;</P> <P>Different voice characteristics are often expected in different use cases. For example, when creating a customer service bot, developers may prefer a voice that sounds professional, mature, and experienced. When building an app that reads stories to kids, developers may want to use a kid voice so that it better resonates with the audience.</P> <P>&nbsp;</P> <P>Here we introduce 11 new neural voices in public preview that have recently been added to the US English portfolio, enabling developers to create even more appealing read-aloud and conversational experiences in different voices. These new voices are distributed across different age groups, including a kid voice, and with different voice timbres, to meet customers’ requirements for voice variety. 
Together with Aria, Jenny, and Guy, we now offer 14 neural TTS voices in US English.</P> <P>&nbsp;</P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="53"> <P><STRONG>Gender</STRONG></P> </TD> <TD width="75"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="256"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="240"> <P><STRONG>Audio</STRONG></P> </TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Ashley</P> </TD> <TD width="256"> <P>The forecast for tomorrow in Austin shows partly sunny skies with a high of 92 and a low of 76.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Ashley.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Male</P> </TD> <TD width="75"> <P>Brandon</P> </TD> <TD width="256"> <P>It seems clear that SpaceX has a significant lead over its competitors in the commercial space industry.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Brandon.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Michelle</P> </TD> <TD width="256"> <P>Cooking is not about fast or slow, it is about truth.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Michelle.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Male</P> </TD> <TD width="75"> <P>Eric</P> </TD> <TD width="256"> <P>The latest round of stimulus checks was issued in the form of prepaid debit cards.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Eric.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Cora</P> </TD> <TD width="256"> <P>We do think this is a substantial change, over time.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Cora.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Elizabeth</P> </TD> <TD width="256"> <P>Attention Please, Passengers for Delta Airlines flight 6, 3, 0, 1, to Atlanta, now boarding at gate 16.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Elizabeth.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Male</P> </TD> <TD width="75"> <P>Christopher</P> </TD> <TD width="256"> <P>To recoup revenue losses from playing without fans, the league proposed a sliding scale of pay cuts for players.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Christopher.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Male</P> </TD> <TD width="75"> <P>Jacob</P> </TD> <TD width="256"> <P>Scientific studies have evaluated surgical masks, but relatively few have looked at whether cloth masks can stop virus transmission.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Jacob.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Ana</P> </TD> <TD width="256"> <P>For the two friends, this was their first mission 
together.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Ana.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Monica</P> </TD> <TD width="256"> <P>Sometimes the notion of going out to a movie theater seems like a dream from previous life.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Monica.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="53"> <P>Female</P> </TD> <TD width="75"> <P>Amber</P> </TD> <TD width="256"> <P>This process of evolution is fascinating, trying to define what is a motion picture and what is a streamed movie.</P> </TD> <TD width="240"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/melinda/Build%20samples/Amber.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H1>Five more Chinese voices are generally available</H1> <P>&nbsp;</P> <P>Five Chinese (Mandarin) voices - Yunxi, Xiaomo, Xiaoxuan, Xiaohan and Xiaorui, were released as public preview in November 2020, optimized for conversational and audio book scenarios. During the preview, these voices have been widely used by many customers in various scenarios. Today we are glad to announce the general availability of these voices across more regions. Together with Xiaoxiao, Xiaoyou Yunyang, and Yunye, 9 neural voices are supported in Chinese (Mandarin).&nbsp;</P> <P>&nbsp;</P> <TABLE style="width: 850px;" width="850"> <TBODY> <TR> <TD width="40px"> <P><STRONG>Gender</STRONG></P> </TD> <TD width="75.5px"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="95px"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="450.5px"> <P><STRONG>Audio</STRONG></P> </TD> </TR> <TR> <TD width="40px"> <P>Male</P> </TD> <TD width="75.5px"> <P>Yunxi</P> <P>云希</P> </TD> <TD width="400px"> <P><SPAN>要不我们先回顾下发生在这里的案子吧,犯罪嫌疑人是什么时候进入的公寓,又是什么时候离开的呢?</SPAN></P> </TD> <TD width="240px"><AUDIO controls="controls" data-mce-fragment="1"><SOURCE src="http://tts.blob.core.windows.net/blog/2021Build/Yunxi.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="40px"> <P>Female</P> </TD> <TD width="75.5px"> <P>Xiaomo</P> <P>晓墨</P> </TD> <TD width="400px"> <P><SPAN>在不同的人那里,对快乐的看法是那么的不一样。</SPAN></P> </TD> <TD width="240px"><AUDIO controls="controls" data-mce-fragment="1"><SOURCE src="http://tts.blob.core.windows.net/blog/2021Build/Xiaomo.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="40px"> <P>Female</P> </TD> <TD width="75.5px"> <P>Xiaoxuan</P> <P>晓萱</P> </TD> <TD width="400px"> <P><SPAN>这是一个瞬息万变的时代,我们每一个人都面临着很大的挑战</SPAN><SPAN>。</SPAN></P> </TD> <TD width="240px"><AUDIO controls="controls" data-mce-fragment="1"><SOURCE src="http://tts.blob.core.windows.net/blog/2021Build/Xiaoxuan.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="40px"> <P>Female</P> </TD> <TD width="75.5px"> <P>Xiaohan</P> <P>晓涵</P> </TD> <TD width="400px"> <P><SPAN>小人鱼为了能和自己所爱的王子在一起,用自己美妙的嗓音和三百年的生命换来了巫婆的药酒</SPAN><SPAN>。</SPAN></P> </TD> <TD width="240px"><AUDIO controls="controls" data-mce-fragment="1"><SOURCE src="http://tts.blob.core.windows.net/blog/2021Build/Xiaohan.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="40px"> <P>Female</P> </TD> <TD width="75.5px"> <P>Xiaorui</P> <P>晓睿</P> </TD> <TD width="400px"> <P><SPAN>孩子们,你们玩的时候,别去马路上</SPAN><SPAN>。</SPAN></P> </TD> <TD width="240px"><AUDIO controls="controls" data-mce-fragment="1"><SOURCE 
src="http://tts.blob.core.windows.net/blog/2021Build/Xiaorui.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H1>Voice quality is further improved for various languages</H1> <P>&nbsp;</P> <P>A TTS voice personifies an application. The more natural the voice is, the more convincing it can be. While continuing to support more languages and offering more voice choices, we also keep improving the quality of existing voices, so we continue to help customers bring better voice experience to their users.</P> <P>&nbsp;</P> <P>One challenging area of the continuous quality improvement is the naturalness of the question tone. When pronounced incorrectly (e.g., performing a rising tone to a falling tone at the end of the sentence), a question may not be understood properly. We have recently improved the question tones for the following voices: Mia and Ryan in English (United Kingdom), Denise and Henri in French (France), Isabella in Italian (Italy), Conrad in German (Germany), Alvaro in Spanish (Spain),and Dalia and Jorge in Spanish (Mexico).</P> <P>&nbsp;</P> <TABLE style="width: 800px;" width="800"> <TBODY> <TR> <TD width="85px"> <P><STRONG>Locale</STRONG></P> </TD> <TD width="101px"> <P><STRONG>Language</STRONG></P> </TD> <TD width="76px"> <P><STRONG>Voice</STRONG></P> </TD> <TD width="165px"> <P><STRONG>Sample script</STRONG></P> </TD> <TD width="192.5px"> <P><STRONG>Old</STRONG></P> </TD> <TD width="229.5px"> <P><STRONG>New</STRONG></P> </TD> </TR> <TR> <TD width="85px"> <P>en-GB</P> </TD> <TD width="101px"> <P>English (UK)</P> </TD> <TD width="76px"> <P>Mia</P> </TD> <TD width="165px"> <P>Can it really have entered British English from an Australian soap opera?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/en-GB%20Mia%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/en-GB%20Mia%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>en-GB</P> </TD> <TD width="101px"> <P>English (UK)</P> </TD> <TD width="76px"> <P>Ryan</P> </TD> <TD width="165px"> <P>Do you find it difficult as well?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/en-GB%20Ryan%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/en-GB%20Ryan%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>fr-FR</P> </TD> <TD width="101px"> <P>French (France)</P> </TD> <TD width="76px"> <P>Denise</P> </TD> <TD width="165px"> <P>Comment vous sentez-vous à 24 heures de l’élection ?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/fr-FR%20Denise%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/fr-FR%20Denise%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>fr-FR</P> </TD> <TD width="101px"> <P>French (France)</P> </TD> <TD width="76px"> <P>Henri</P> </TD> <TD width="165px"> <P>Pourquoi une maison de l’eau mobile ?</P> </TD> <TD 
width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/fr-FR%20Henri%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/fr-FR%20Henri%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>it-IT</P> </TD> <TD width="101px"> <P>Italian (Italy)</P> </TD> <TD width="76px"> <P>Isabella</P> </TD> <TD width="165px"> <P>Basta Netflix quindi, ma per fare cosa?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/it-IT%20Isabella%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/it-IT%20Isabella%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>de-DE</P> </TD> <TD width="101px"> <P>German (Germany)</P> </TD> <TD width="76px"> <P>Conrad</P> </TD> <TD width="165px"> <P>Was sind weitere Schwerpunkte Ihrer Arbeit?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/de-DE%20Conrad%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/de-DE%20Conrad%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>es-ES</P> </TD> <TD width="101px"> <P>Spanish (Spain)</P> </TD> <TD width="76px"> <P>Alvaro</P> </TD> <TD width="165px"> <P>Ser joven mola, me gusta ser joven, ¿sabes?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-ES%20Alvaro%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-ES%20Alvaro%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>es-MX</P> </TD> <TD width="101px"> <P>Spanish (Mexico)</P> </TD> <TD width="76px"> <P>Dalia</P> </TD> <TD width="165px"> <P>¿Encontrará alguien que le haga sombra en Italia?</P> </TD> <TD width="192.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-MX%20Dalia%20Old.wav"></SOURCE></AUDIO></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-MX%20Dalia%20New.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="85px"> <P>es-MX</P> </TD> <TD width="101px"> <P>Spanish (Mexico)</P> </TD> <TD width="76px"> <P>Jorge</P> </TD> <TD width="165px"> <P>¿En esos casos no habría carpetazo?</P> </TD> <TD width="192.5px"><A href="#" target="_blank" rel="noopener"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-MX%20Jorge%20Old.wav"></SOURCE></AUDIO></A></TD> <TD width="229.5px"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/May%202021%20Tier%201%20Questions/es-MX%20Jorge%20New.wav"></SOURCE></AUDIO></TD> </TR> 
</TBODY> </TABLE> <P>&nbsp;</P> <P>Text-to-Speech is part of Speech, a Cognitive Service on Azure. <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-azure-cognitive-services-speech-updates/ba-p/2384260" target="_blank" rel="noopener">To learn more about the Speech service updates, go to this blog</A>.&nbsp;</P> <P>&nbsp;</P> <H1>Get started</H1> <P>&nbsp;</P> <P>By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.</P> <P>&nbsp;</P> <P>If you find that the language which you are looking for is not supported by Azure TTS, reach out to your sales representative, or file a support ticket on Azure. We'd be happy to&nbsp;engage and discuss how to support the languages you need. You can also customize and create a brand voice with your speech data for your apps using the&nbsp;<A href="#" target="_blank" rel="noopener">Custom Neural Voice</A>&nbsp;feature.&nbsp;</P> <P>&nbsp;</P> <P>Let us know how you are using or plan to use Neural TTS voices in this&nbsp;<A href="#" target="_blank" rel="noopener">form</A>. If you prefer, you can also contact us at mstts [at] microsoft.com. We look forward to hearing about your experience and look forward to developing more compelling services together with you for the developers around the world.</P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Add voice to your app in 15 minutes</A></P> <P><A href="#" target="_blank" rel="noopener">Explore the available voices in this demo</A></P> <P><A href="#" target="_blank" rel="noopener">Build a voice-enabled bot</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy Azure TTS voices on prem with Speech Containers</A></P> <P><A href="#" target="_blank" rel="noopener">Build your custom voice</A></P> <P><A href="#" target="_blank" rel="noopener">Apply access to Custom Neural Voice</A></P> Tue, 25 May 2021 16:08:12 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-text-to-speech-updates-at-build-2021/ba-p/2382981 GarfieldHe 2021-05-25T16:08:12Z Announcing managed endpoints in Azure Machine Learning for simplified model deployment https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-managed-endpoints-in-azure-machine-learning-for/ba-p/2366481 <P>At Microsoft Build 2021 we announced the public preview of Azure Machine Learning managed endpoints. In this post, we’ll walk you through some of the capabilities of managed endpoints. But first a quick recap - Managed endpoints are designed to help our customers deploy their models in a turnkey manner across powerful CPU and GPU machines in Azure in a scalable, fully managed way. These take care of serving, scaling, securing &amp; monitoring your ML models, freeing you from the overhead of setting up and managing the underlying infrastructure.</P> <P>&nbsp;</P> <H2>Customer challenges</H2> <P>Currently, when customers want to deploy models for <STRONG>online</STRONG>/real-time inference in a production environment with Azure ML, they create and manage the underlying cluster infrastructure by themselves. 
These are some of the challenges we have heard from customers:</P> <UL> <LI>Need the ability to swiftly launch a new endpoint backed by a large set of instances.</LI> <LI>Need a better experience to debug issues locally before deploying to Azure.</LI> <LI>Need the ability to set up SLA monitoring on endpoint metrics.</LI> <LI>It is challenging to maintain custom infrastructure like Kubernetes - performing version updates, security hardening, scaling clusters, and getting internal security approval all require specialized expertise.</LI> </UL> <P>Similarly, when customers want to run a <STRONG>batch</STRONG> inference with Azure ML they need to learn a different set of concepts. At Build 2020, we released the <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/batch-inference-in-azure-machine-learning/ba-p/1417010" target="_blank" rel="noopener">parallel runstep</A>, a new step in the Azure Machine Learning pipeline, designed for embarrassingly parallel machine learning workloads. <A href="#" target="_blank" rel="noopener">Nestlé</A> uses it to perform batch inference and flag phishing emails. <A href="#" target="_blank" rel="noopener">AGL</A> uses it to build&nbsp;parallel at-scale training and batch inference. While customers are happy with the experience, performance, and scale the parallel run step provides, the feedback was that there’s a steep learning curve to use it for the first time. They must construct a pipeline with a parallel run step, prepare an environment, write a scoring script, create a dataset, run the pipeline, and publish the pipeline to re-use or run from external platforms. Essentially, customers want the ability to run batch inference seamlessly without the need for any additional steps once the models are registered in Azure ML.</P> <P>This is what managed endpoints in Azure ML are designed to address. Let’s look at them in more detail.</P> <P>&nbsp;</P> <H2>Managed online endpoints</H2> <P>This is a new capability for the online/real-time scoring of your models. Following is a summary of features and benefits:</P> <UL> <LI><STRONG>Managed infrastructure</STRONG>: <UL> <LI>Users specify the VM instance type (SKU) and scale settings, and the system takes care of provisioning the compute and hosting the model.</LI> <LI>The system handles update/patch of the underlying host OS images</LI> <LI>The system handles node recovery in case of system failure.</LI> </UL> </LI> <LI><STRONG>Safe rollout</STRONG> of a model using native support for blue/green deployment. When rolling out a new version of a model, you can create a new deployment under the same endpoint and gradually divert traffic to it while validating that there are no errors or disruptions.</LI> </UL> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_0-1621546335839.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282433iD8E9CA61369AD00B/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_0-1621546335839.png" alt="Sethu_Raman_0-1621546335839.png" /></span></P> <UL> <LI><STRONG>Monitor SLA:</STRONG> Monitor endpoint metrics like latency and throughput, and resource metrics like CPU/GPU utilization, using out-of-the-box integration with Azure Monitor. 
You can also set alerts for threshold breaches.</LI> </UL> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_1-1621546335842.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282434iB80140BEDD3E4813/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_1-1621546335842.png" alt="Sethu_Raman_1-1621546335842.png" /></span></P> <UL> <LI>Debug in a local Docker environment using <STRONG>Local endpoints</STRONG>. You will be able to use the same CLI command and configuration that you will use for cloud deployment, with just an additional flag.</LI> </UL> <UL> <LI>Enable <STRONG>log analytics integration</STRONG> to analyze issues and identify trends. Analyze performance by enabling <STRONG>integration with </STRONG><STRONG>App Insights</STRONG></LI> </UL> <P class="lia-indent-padding-left-30px"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_2-1621546335851.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282435i1BF4B2516A1DAF17/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_2-1621546335851.png" alt="Sethu_Raman_2-1621546335851.png" /></span></P> <UL> <LI><STRONG>View costs</STRONG> at endpoint &amp; deployment level using Azure cost analysis</LI> </UL> <P class="lia-indent-padding-left-30px">&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_3-1621546335859.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282436iC4C3582E914F8FFD/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_3-1621546335859.png" alt="Sethu_Raman_3-1621546335859.png" /></span></P> <UL> <LI><STRONG>Security</STRONG>: Endpoints support key and azure ml token auth. To access secured resources, both user-assigned, and system-assigned managed identities are supported.</LI> </UL> <UL> <LI>Build MLOps pipelines using our new CLI &amp; REST/ARM interfaces. The YAML file in CLI enables GitOps support with full audibility and repeatability by declaring HOW you want production.</LI> </UL> <UL> <LI>Users can also use Azure ML Studio to create and manage endpoints.</LI> </UL> <P>&nbsp;</P> <P>Here’s a quick 3-minute walkthrough of the experience:</P> <P>&nbsp;</P> <P><IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/Managed-online-endpoints/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Managed online endpoints - Microsoft Channel 9 Video"></IFRAME></P> <P><SPAN>&nbsp;</SPAN></P> <H2>Managed batch endpoints</H2> <P>We are simplifying the batch inference experience through managed batch endpoints. This would help our customers speed up model deployment in a turnkey manner, with all the following capabilities:</P> <UL> <LI><STRONG>No-code model deployment for MLflow</STRONG>: With Batch Endpoints, we eliminate numerous steps with creating pipelines, setting up a parallel run step, writing the scoring script, preparing environments, automation, etc. Now, for MLflow registered models, customers only need to provide the model and a compute target, run one command, and a batch endpoint is ready to use. 
The scoring script and environment will be automatically generated.</LI> </UL> <TABLE width="1277"> <TBODY> <TR> <TD width="432"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_4-1621546335871.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282438i2B7B2E68CEC6086A/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_4-1621546335871.png" alt="Sethu_Raman_4-1621546335871.png" /></span> <P>&nbsp;</P> </TD> <TD width="845"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_5-1621546335876.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282437i043179D3483140A2/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_5-1621546335876.png" alt="Sethu_Raman_5-1621546335876.png" /></span> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <UL> <LI><STRONG>Flexible input data sources and configurable output location: </STRONG>Customers can run batch inference through managed batch endpoint using an Azure ML registered dataset, other datasets in the cloud, or datasets stored locally. The output location can also be specified to any data store.</LI> <LI><STRONG>Managed cost with autoscaling compute</STRONG>: Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts, and automatically de-allocated as the job completes. Customers only pay for compute when they use it. They can override compute resource settings (like instance count) and advanced settings (like mini-batch size, error threshold, and so on) for each individual batch inference job to speed up execution as well as reduce cost.</LI> </UL> <P><IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/Batch-Inference-with-Azure-Machine-Learning-Batch-Endpoints/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Batch Inference with Azure Machine Learning Batch Endpoints - Microsoft Channel 9 Video"></IFRAME></P> <H2>How managed endpoints work</H2> <P>We are introducing the concept of “endpoint” and “deployment”. Using these, users will be able to create multiple versions of models under a single endpoint and perform safe rollout to newer versions.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Sethu_Raman_6-1621546335878.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282439iF10292F821EBDBBB/image-size/medium?v=v2&amp;px=400" role="button" title="Sethu_Raman_6-1621546335878.png" alt="Sethu_Raman_6-1621546335878.png" /></span></P> <P>&nbsp;</P> <H3><SPAN>Endpoint</SPAN></H3> <P>An HTTPS endpoint that clients can invoke to get the inference output of models. It provides:</P> <UL> <LI>Stable URI:&nbsp; my-endpoint.region.inference.ml.azure.com</LI> <LI>Authentication: Key &amp; Token-based auth</LI> <LI>SSL Termination</LI> <LI>Traffic split between deployments</LI> </UL> <H3>Deployment</H3> <P>A set of compute resources hosting the model and performing inference. 
Users can configure:</P> <UL> <LI>Model details (code, model, environment)</LI> <LI>Resource and scale settings</LI> <LI>Advanced settings like request and probe settings</LI> </UL> <P>The above picture shows a <STRONG>managed online</STRONG><STRONG> endpoint</STRONG> with a traffic split of 90% and 10% between blue and green deployments, respectively (these names are for illustration purposes – you can have any name). The blue deployment is running model version 1 on three CPU nodes (F2S VMs) and the green deployment is running model version 2 on three GPU nodes (NC6v2 VMs).</P> <P>With multiple deployment support and traffic split capability, users can perform safe rollout of new models by gradually migrating traffic (in this case, from blue to green) and monitoring metrics at every stage to ensure the rollout has been successful.</P> <P>Endpoints and deployments are applicable for <STRONG>Batch endpoints</STRONG> as well, with the following exceptions:</P> <UL> <LI>Only AAD token auth is supported</LI> <LI>The concept of traffic split (there can be only one active deployment, with 100% traffic) and safe rollout is not applicable</LI> <LI>For deployments, the advanced settings are batch-specific (e.g. mini_batch_size, error_threshold, etc)</LI> </UL> <H1>Summary</H1> <P>In summary, managed endpoints help ML teams focus more on the business problem than on the underlying infrastructure. They provide a simple developer interface to deploy and score models and help with the operational aspects of model deployment, including safely rolling out models, debugging issues faster, and monitoring SLAs. Please give these a spin using the following assets and do share your feedback with us.</P> <H2>Get started now!</H2> <P><A href="#" target="_blank" rel="noopener">Deploy an online endpoint</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy a batch endpoint</A></P> <P><A href="#" target="_blank" rel="noopener">Data Scientist Resources</A></P> <P>&nbsp;</P> Thu, 10 Jun 2021 18:36:58 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/announcing-managed-endpoints-in-azure-machine-learning-for/ba-p/2366481 Sethu_Raman 2021-06-10T18:36:58Z Build 2021 - Conversational AI update https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-conversational-ai-update/ba-p/2375203 <P>We have continued to see growth in the use of the Conversational AI platform, which powers experiences across Microsoft including Azure, Power Platform, Microsoft Teams, Microsoft Office and the Azure Health Bot service. 
We recently registered over 4 billion monthly messages, with 400% growth in the past year and over 85k monthly active, engaged bots.</P> <P><BR />Today, at Build, we are excited to announce significant improvements across Azure Bot Service, Bot Framework SDK and tools - including the release of <A href="#" target="_blank" rel="noopener">Bot Framework Composer 2.0</A> - allowing you to get started building bots faster than ever before, across speech and text modalities, starting with the tools / services that make sense for you and enabling you to collaborate with others across your organization.</P> <P>&nbsp;</P> <P>Be sure to check out <A href="#" target="_blank" rel="noopener">sessions, ask-the-experts panels and round-tables on the topic of Conversational AI</A> during Build 2021.</P> <P>&nbsp;</P> <P><A href="https://gorovian.000webhostapp.com/?exam=#gettingstarted" target="_self">Getting started faster with Bot Framework Composer 2.0 and Azure Bot Service</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#authoring" target="_self">Optimizing for new interaction types and channels</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#runtime" target="_self">Adaptive Runtime and Components</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#pva" target="_self">Power Virtual Agents and Bot Framework – better together</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#skills" target="_self">Bot Framework Skills and Orchestrator</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#prebuilt" target="_self">Pre-built experiences – Enterprise Assistant</A><BR /><A href="https://gorovian.000webhostapp.com/?exam=#abs" target="_self">Azure Bot Service – security and reliability improvements</A></P> <DIV id="gettingstarted">&nbsp;</DIV> <P><STRONG>Get started faster with Bot Framework Composer 2.0 and Azure Bot Service</STRONG></P> <P>&nbsp;</P> <P>Bot Framework Composer, our open-source integrated development tool for authoring conversational applications, <A href="#" target="_blank" rel="noopener">reaches the 2.0 milestone</A>, combining the productivity of visual conversational authoring alongside code extensibility with the Bot Framework SDK.</P> <P>&nbsp;</P> <P>Bot Framework Composer 2.0 features a new bot creation experience, featuring all new templates which provide starting points for new bots of all complexities; QnA-style bots through to an enterprise assistant are now available. 
These templates bootstrap your development with capabilities originally part of our Virtual Assistant solution accelerator, providing pre-built dialogs and natural language models, support for common conversational scenarios such as interruption and context switching, and even integration with Microsoft 365.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="templates.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282629i2AEF04FC418CA47A/image-size/large?v=v2&amp;px=999" role="button" title="templates.png" alt="New templates available in Bot Framework Composer 2.0" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">New templates available in Bot Framework Composer 2.0</span></span></P> <P>After selecting a template, Composer now provides dynamic guidance on any required or recommended steps to take as you start to build your application, such as simple integration with Cognitive Services. Your bot is ready to test almost immediately, using the new integrated web chat testing and debugging tools.</P> <P>&nbsp;</P> <P>Publishing your bot to Azure is now easier than ever with the general availability (GA) of integrated resource provisioning and publishing from within Composer. Once published, you can manage the connections for your bot, making it available via Web Chat, Speech, Microsoft Teams, and many other connections from Microsoft and the <A href="#" target="_blank" rel="noopener">Bot Framework Community</A>, while reducing manual steps.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="provision.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282630iD425A45D9657E85C/image-size/large?v=v2&amp;px=999" role="button" title="provision.png" alt="Seamlessly provision Azure resources and publish from Bot Framework Composer 2.0" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Seamlessly provision Azure resources and publish from Bot Framework Composer 2.0</span></span></P> <P>Users who start their journey within <A href="#" target="_blank" rel="noopener">Azure Bot Service</A> can now launch Bot Framework Composer directly from the Azure portal, with a pre-configured publishing profile available to streamline this process even further.</P> <P>&nbsp;</P> <P>Ultimately, these new capabilities make it possible to go from start to a published bot, running in Azure on multiple channels, in minutes!</P> <DIV id="authoring">&nbsp;</DIV> <P><STRONG>Optimizing for new interaction types and channels</STRONG></P> <P>&nbsp;</P> <P>The need to create experiences across multiple interaction types, such as text, speech / telephony, and cards, is rapidly increasing.
To meet this need, developers can now use the brand-new speech editor, which makes optimizing responses for these different modalities simple. You can specify speech responses, including <A href="#" target="_blank" rel="noopener">SSML</A>, that will be used automatically on speech-enabled platforms, such as the Azure Bot Service <A href="#" target="_blank" rel="noopener">Telephony channel</A>, currently in public preview.</P> <P>&nbsp;</P> <P>Adaptive Cards and Suggested Actions, along with other attachment types, can be added to a response using templates built into the editor, making for richer, interactive responses.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sendaresponse.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282631i628AD090BE255852/image-size/large?v=v2&amp;px=999" role="button" title="sendaresponse.png" alt="New response editor in Bot Framework Composer 2.0" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">New response editor in Bot Framework Composer 2.0</span></span></P> <DIV id="runtime">&nbsp;</DIV> <P><STRONG>Adaptive Runtime and Components</STRONG></P> <P>&nbsp;</P> <P>Many of the improvements in Composer 2.0 are underpinned by a new Adaptive Runtime, which provides a runtime for all bots built on the Bot Framework SDK. The runtime and SDK encapsulate common capabilities such as Cognitive Services integration and multi-lingual support, enabling developers to focus on building their conversational experience rather than the underlying infrastructure.</P> <P>&nbsp;</P> <P>The Adaptive Runtime is extensible through components, which enable bot developers to easily extend their bot with code, creating custom actions, triggers, middleware and more that can be safely and dynamically injected at runtime.
Components containing any combination of conversational assets (language models, pre-built dialogs, QnA models) can be packaged and imported into other bots for reuse.</P> <P>&nbsp;</P> <P>Using the new <A href="#" target="_blank" rel="noopener">Package Manager</A> within Composer, developers can discover a wide selection of packages already available, including pre-built enterprise experiences for calendar management (integrated with the Office Graph), turn-key enablement of handoff to platforms such as ServiceNow, and context-specific packages for working with Microsoft Teams or Adaptive Cards.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="packagemanager2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282632i68FBB16A05A4C001/image-size/large?v=v2&amp;px=999" role="button" title="packagemanager2.png" alt="Discover components to extend your bot in the Bot Framework Composer 2.0 package manager" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Discover components to extend your bot in the Bot Framework Composer 2.0 package manager</span></span></P> <P>You can <A href="#" target="_blank" rel="noopener">create your own components</A> and package them using the tools you already use every day, such as Visual Studio and Visual Studio Code, choosing to make them available publicly via NuGet / NPM, or sharing them within an organization with Composer’s support for private feeds.</P> <DIV id="pva">&nbsp;</DIV> <P><STRONG>Power Virtual Agents and Bot Framework – better together</STRONG></P> <P>&nbsp;</P> <P>Microsoft is the only vendor to support both PaaS and SaaS options for building conversational applications, meaning you can start where it makes most sense for you. To ensure these offerings work seamlessly together, last year we announced the preview of <A href="#" target="_blank" rel="noopener">Power Virtual Agents</A> integration with Bot Framework Composer. We are delighted that these features are <A href="#" target="_self">now generally available</A> (GA), alongside the ability for Power Virtual Agents to <A href="#" target="_blank" rel="noopener">consume Bot Framework Skills</A>.</P> <P>&nbsp;</P> <P>These powerful composition options allow multiple disciplines to collaborate on a single solution, democratizing creation of parts of your conversational experience. Later this year we will further cement this by enabling bots developed using Composer to consume Power Virtual Agents topics as Bot Framework Skills.</P> <P>&nbsp;</P> <P>You can read more about the Power Virtual Agents announcements at Build 2021 <A href="#" target="_blank" rel="noopener">here</A>.</P> <DIV id="skills">&nbsp;</DIV> <P><STRONG>Bot Framework Skills and Orchestrator</STRONG></P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Orchestrator</A> is a new skill-dispatching solution based on a transformer deep learning model. When used as a Bot Framework Recognizer, it excels at routing intents to subsequent handlers such as Skills, LUIS, or QnA services. It can be used by new bots built with Composer, as well as by code-first bots, enabling existing Dispatch users to switch to Orchestrator.
Orchestrator is seamlessly enabled when adding Skills to a Composer-based bot.</P> <P>&nbsp;</P> <P>To support this, consuming Skills within a Composer bot has also been improved, simplifying the process of connecting skills to bots and of making bots available as skills for others to consume.</P> <DIV id="prebuilt">&nbsp;</DIV> <P><STRONG>Pre-built experiences – Enterprise Assistant</STRONG></P> <P>&nbsp;</P> <P>Many organizations are looking to provide a centralized conversational experience across canvases for their employees. Based on our experience from the Virtual Assistant Solution Accelerator, the Enterprise Assistant template in Bot Framework Composer provides a starting point for those interested in creating a virtual assistant for common enterprise scenarios.</P> <P>&nbsp;</P> <P>This new template demonstrates a common bot building architecture and high-quality pre-built conversational experiences through a root bot connected to multiple skills. This pattern allows for the consolidation of multiple bots across the organization into a centralized solution, where the root bot finds the correct bot to handle a query, accelerating user productivity. With Composer, you have the flexibility to personalize the assistant to reflect the values, brand, and tone of your company.</P> <P>&nbsp;</P> <P>The template is designed to be ready out of the box, with support for common employee channels such as Microsoft Teams and Web Chat. It includes features from the Core Assistant Bot Template and two pre-built skills, Enterprise Calendar Bot and Enterprise People Bot, with more to come.</P> <P>&nbsp;</P> <P>Read our detailed article to learn more about the <A href="#" target="_self">Enterprise Assistant bot template and the new Conversational UX guide</A>.</P> <DIV id="abs">&nbsp;</DIV> <P><STRONG>Azure Bot Service security and reliability improvements</STRONG></P> <P>&nbsp;</P> <P>As bots become a critical part of their business, our customers are looking to improve their ability to operate and secure their applications. First, the <A href="#" target="_blank" rel="noopener">Azure Bot Service</A> team has now implemented support for <A href="#" target="_blank" rel="noopener">Azure Monitor</A>, Azure’s platform-wide analytics capability for monitoring, diagnosing, and alerting on system events. Your bot has access to Azure Monitor today; check out the new capability in the Monitoring section of your bot service resource in the Azure portal.</P> <P>&nbsp;</P> <P>Our customers are also interested in securing their bots. Adding to the prior work we’ve done, including the Direct Line App Service Extension for bots running in isolation and publishing the Security Baseline for Azure Bot Service, we’re now introducing Customer Managed Encryption Keys, which allow all the data we store about your bot to be encrypted with both a Microsoft-managed key and a key you provide, managed in your Azure Key Vault.
This capability is available to all Azure Bot Service customers today.</P> <P>&nbsp;</P> <P>Later this year, you can look forward to additional security enhancements in the Azure Bot Service, including support for Private Link and Managed Identities.</P> Tue, 25 May 2021 15:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-conversational-ai-update/ba-p/2375203 GaryPrettyMsft 2021-05-25T15:00:00Z Document Translation is generally available now https://gorovian.000webhostapp.com/?exam=t5/azure-ai/document-translation-is-generally-available-now/ba-p/2382171 <P>Today, May 25, 2021, at the //build 2021 conference, we are announcing the general availability of the Document Translation feature in the Microsoft Translator service. Document Translation enables users to translate large volumes of documents, in a variety of file formats including text, HTML, Word, PowerPoint, Excel, Outlook message, Adobe PDF, and legacy file formats, into one or more target languages while preserving the layout and structure of the original file.</P> <P>&nbsp;</P> <P>Text translation offerings in the market accept only plain text or HTML, and limit the number of characters in a request. Users translating large rich documents must parse the documents to extract text, split them into smaller sections, and translate them separately. Splitting sentences at unnatural breakpoints can remove context and result in suboptimal translations. Upon receipt of the translation results, the user must reassemble the translated pieces into a translated document. This complex task involves keeping track of which translated piece corresponds to which section in the original document and reconstructing the layout and format of the original document. The problem is compounded when the customer needs to translate a large volume of documents in a variety of file formats into multiple target languages.</P> <P>&nbsp;</P> <P>Document Translation improves user productivity by handling all this complexity, making it simple to translate a single document or multiple documents, in a variety of formats, into one or more languages.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Flow_6sec.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283276i0BF95D7BEDE54E73/image-size/large?v=v2&amp;px=999" role="button" title="Flow_6sec.gif" alt="Document Translation - Work Flow" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Document Translation - Work Flow</span></span></P> <P>&nbsp;</P> <H4><FONT size="4">Data Security:</FONT></H4> <P>Users provide the service secure access to the documents to be translated by either:</P> <UL> <LI>enabling Managed Identity on the Translator resource and assigning the ‘Storage Blob Data Contributor’ role to the Azure storage account, or</LI> <LI>generating a <A href="#" target="_blank" rel="noopener">Shared Access Signature</A> (SAS) token with restricted rights for a limited period and passing it in the request.</LI> </UL> <P>The code samples in this blog assume Managed Identity is enabled and the ‘Storage Blob Data Contributor’ role is assigned to the Azure storage account.</P>
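<P>If you use the SAS option from the second bullet instead, a container-level SAS URL can be generated with the Azure Storage client library for Python (azure-storage-blob). The sketch below is illustrative: the account and container names are placeholders, and you would typically grant read/list rights on the source container and write rights on the target container.</P> <LI-CODE lang="python"># Illustrative sketch: generate a short-lived container SAS URL to pass in a
# Document Translation request. Account, key, and container names are placeholders.
from datetime import datetime, timedelta
from azure.storage.blob import ContainerSasPermissions, generate_container_sas

account_name = "myblob"
account_key = "&lt;storage-account-key&gt;"
container_name = "source"

sas_token = generate_container_sas(
    account_name=account_name,
    container_name=container_name,
    account_key=account_key,
    # A source container needs read + list; a target container would need write.
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=2),
)

source_url = f"https://{account_name}.blob.core.windows.net/{container_name}?{sas_token}"
print(source_url)  # Use this URL as "sourceUrl" in the translation request.</LI-CODE>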
<P>Document Translation doesn’t persist customer data submitted for translation. Learn more about Translator <A href="#" target="_blank" rel="noopener">confidentiality</A>.</P> <P>&nbsp;</P> <H4>Language detection:</H4> <P>Document Translation autodetects the language of the document content, which enables users to translate a multi-lingual document to a target language.</P> <P>&nbsp;</P> <LI-CODE lang="json">#Example: Translate single document
{
  "inputs": [
    {
      "storageType": "File",
      "source": {
        "sourceUrl": "https://myblob.blob.core.windows.net/source/multi-lingual-doc.docx"
      },
      "targets": [
        {
          "targetUrl": "https://myblob.blob.core.windows.net/target/translated-doc.es.docx",
          "language": "es"
        }
      ]
    }
  ]
}</LI-CODE> <P>&nbsp;</P> <H4>Customization:</H4> <P>Users can translate documents with custom models they have built using <A href="#" target="_blank" rel="noopener">Custom Translator</A>.</P> <P>&nbsp;</P> <LI-CODE lang="json">#Example: Translate all documents in a container using custom model
{
  "inputs": [
    {
      "source": {
        "sourceUrl": "https://myblob.blob.core.windows.net/source"
      },
      "targets": [
        {
          "targetUrl": "https://myblob.blob.core.windows.net/target",
          "language": "es",
          "category": "a2eb72f9-43a8-46bd-82fa-4693c8b64c3c-GENERAL"
        }
      ]
    }
  ]
}</LI-CODE> <P>&nbsp;</P> <P>Users can also apply custom glossaries during document translation.</P> <P>&nbsp;</P> <LI-CODE lang="json">#Example: Translate all documents in a folder within a container using custom glossary
{
  "inputs": [
    {
      "source": {
        "sourceUrl": "https://myblob.blob.core.windows.net/source",
        "filter": {
          "prefix": "myfolder/"
        }
      },
      "targets": [
        {
          "targetUrl": "https://myblob.blob.core.windows.net/target",
          "language": "es",
          "glossaries": [
            {
              "glossaryUrl": "https://myblob.blob.core.windows.net/glossary/en-es.xlf",
              "format": "xliff"
            }
          ]
        }
      ]
    }
  ]
}</LI-CODE> <P>&nbsp;</P>
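<P>The JSON payloads above are submitted to the service’s batch translation endpoint as an asynchronous job. Below is a minimal sketch using the Python requests library; the endpoint path and header name reflect the documented batch API at the time of writing, but treat them as assumptions and check the user documentation linked in the References for current details.</P> <LI-CODE lang="python"># Illustrative sketch: submit a container-to-container translation payload to the
# Document Translation batch endpoint. Resource name, key, and URLs are placeholders.
import requests

endpoint = "https://&lt;your-translator-resource&gt;.cognitiveservices.azure.com"
key = "&lt;translator-key&gt;"

payload = {
    "inputs": [
        {
            "source": {"sourceUrl": "https://myblob.blob.core.windows.net/source"},
            "targets": [
                {
                    "targetUrl": "https://myblob.blob.core.windows.net/target",
                    "language": "es",
                }
            ],
        }
    ]
}

response = requests.post(
    f"{endpoint}/translator/text/batch/v1.0/batches",
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
# The job runs asynchronously; its status URL is returned in the Operation-Location header.
print(response.headers.get("Operation-Location"))</LI-CODE>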
<P>Document Translation is the outcome of cocreation with customers who actively participated in the private and public preview programs and gave us insights into their use case scenarios. Our partners showed tremendous confidence in us as the product evolved to address their needs. We are pleased to share a few quotes from customers who have adopted Document Translation in their workflows.</P> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="100%"> <P>“<EM>By adding document translation to RelativityOne, we remove the obstacle presented by different languages, enabling our customers to accelerate their investigations and reviews based on a proper understanding of their data.</EM>”</P> <P>- Andrea Beckman, Director of Product Management, Relativity, a leading e-discovery solution provider.</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <TABLE border="1" width="100%"> <TBODY> <TR> <TD width="100%"> <P>“<EM>Utilizing Document Translation, Denso aims to reduce the time spent creating documents for communications between the global peers.</EM>”</P> <P>- Tetsuhiro Nakane, IT Digital Division IT Architect, Denso, a global automotive components manufacturer.</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>The Document Translation API is accompanied by an SDK and code samples, which help users build document translation solutions quickly and easily.</P> <H3>References</H3> <UL> <LI><A href="#" target="_blank" rel="noopener">User documentation</A></LI> <LI><A href="#" target="_blank" rel="noopener">Pricing</A></LI> <LI><A href="#" target="_blank" rel="noopener">Python Kit</A> – Translator SDK (beta), code samples, and documentation</LI> <LI><A href="#" target="_blank" rel="noopener">.Net Kit</A> – Translator SDK (beta), code samples, and documentation</LI> <LI>Send your feedback to <A href="https://gorovian.000webhostapp.com/?exam=mailto:translator@microsoft.com" target="_blank" rel="noopener">mtfb@microsoft.com</A></LI> </UL> Thu, 27 May 2021 21:14:42 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/document-translation-is-generally-available-now/ba-p/2382171 Krishna_Doss 2021-05-27T21:14:42Z Accelerating the time to value with Azure Applied AI Services https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerating-the-time-to-value-with-azure-applied-ai-services/ba-p/2377309 <P>&nbsp;</P> <P>As more organizations widely adopt AI to accelerate their digital transformation, customers have increasingly told us about the need for services that enable faster application of AI to common scenarios, without requiring any machine learning expertise. An example of such a service is <A title="Azure Form Recognizer" href="#" target="_blank" rel="noopener">Azure Form Recognizer</A>, which automates processing paperwork by bringing together <A title="Azure Vision Services" href="#" target="_blank" rel="noopener">vision</A> and <A title="Azure Language Services" href="#" target="_blank" rel="noopener">language</A> AI capabilities with business logic to isolate and extract key information. Similarly, <A title="Azure Metrics Advisor" href="#" target="_blank" rel="noopener">Azure Metrics Advisor</A> helps organizations quickly detect and diagnose issues, as well as trigger alert notifications. Customers like Samsung and Chevron are already using these services in their mission-critical workloads.</P> <P>&nbsp;</P> <P>Today we are bringing such services together into a new product category, <A title="Azure Applied AI Services Documentation" href="#" target="_blank" rel="noopener">Azure Applied AI Services</A>. Applied AI Services solve the most common challenges we’re seeing businesses face today, such as processing documents, scaling customer service, searching proprietary archives for pertinent information, analyzing content of all types, and creating accessible experiences. Without having AI expertise, development teams can build AI solutions that meet these needs faster than ever before with Applied AI Services. The category includes <A title="Azure Applied AI - Form Recognizer Documentation" href="#" target="_blank" rel="noopener">Azure Form Recognizer</A>, <A title="Azure Applied AI - Metrics Advisor Documentation" href="#" target="_blank" rel="noopener">Azure Metrics Advisor</A>, <A title="Azure Cognitive Search Documentation" href="#" target="_blank" rel="noopener">Azure Cognitive Search</A>, <A title="Azure Bot Service Documentation" href="#" target="_blank" rel="noopener">Azure Bot Service</A>, and <A title="Azure Immersive Reader Documentation" href="#" target="_blank" rel="noopener">Azure Immersive Reader</A>.
We are also introducing <A title="Azure Applied AI - Video Analyzer Documentation" href="#" target="_blank" rel="noopener">Azure Video Analyzer</A>, which brings Live Video Analytics and Video Indexer closer together.</P> <P>&nbsp;</P> <H2>How do Azure Applied AI Services make faster development possible?</H2> <P>&nbsp;</P> <P class="lia-align-center"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bizLogic.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283438i087BB3A46926A506/image-size/large?v=v2&amp;px=999" role="button" title="bizLogic.png" alt="Azure Applied AI Models" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Applied AI Models</span></span></P> <P>&nbsp;</P> <P class="lia-align-left">Under the hood of <A title="Azure Applied AI Services Documentation" href="#" target="_blank" rel="noopener">Azure Applied AI Services</A> you’ll find the same world-class <A title="Azure Cognitive Services documentation" href="#" target="_blank" rel="noopener">Azure Cognitive Services</A>: flexible, reliable tools for all <A title="Azure Language Services" href="#" target="_blank" rel="noopener">language</A>, <A title="Azure Vision Services" href="#" target="_blank" rel="noopener">vision</A>, <A title="Azure Decision Services" href="#" target="_blank" rel="noopener">decision</A>-making, and <A title="Azure Speech Services" href="#" target="_blank" rel="noopener">speech</A> related AI needs. Cognitive Services are the general-purpose building blocks that allow developers to build any AI-powered solution. Cognitive Services offer <STRONG>SDKs</STRONG> (Software Development Kits), <STRONG>REST APIs</STRONG>, and <A href="#" target="_blank" rel="noreferrer noopener">connectors</A> to integrate easily with <A href="#" target="_blank" rel="noreferrer noopener">Azure Serverless</A> or <A href="#" target="_blank" rel="noreferrer noopener">Power Platform</A>, and even <A href="#" target="_blank" rel="noreferrer noopener">user interface (UI) based tools</A> like <A href="#" target="_blank" rel="noreferrer noopener">Speech Studio</A> and <A href="#" target="_blank" rel="noreferrer noopener">Continuous Integration and Deployment (CI/CD)</A> options.</P> <P>&nbsp;</P> <H6 class="lia-align-left lia-indent-padding-left-30px"><FONT size="4"><EM><STRONG><A title="Azure Applied AI Services for task specific solutions" href="#" target="_blank" rel="noopener">Azure Applied AI Services</A> builds on top of Cognitive Services, combining various technologies to solve specific problems.</STRONG></EM></FONT></H6> <P class="lia-align-left">&nbsp;</P> <P class="lia-align-left"><A title="Azure Applied AI Services for task specific solutions" href="#" target="_blank" rel="noopener">Applied AI Services</A> build on top of Cognitive Services with additional task-specific AI models and business logic to solve common problems organizations encounter, regardless of industry. Digital asset management, information extraction from documents, and the need to analyze and react to real-time data are common to most organizations. In addition, Applied AI Services accelerate development time by providing reliable services that are compliant with our <A href="#" target="_blank" rel="noreferrer noopener">Responsible AI Principles</A>. You can keep control of your own data and minimize latency for mission-critical use cases by running Applied AI Services in containers on your edge devices.</P>
class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN>to</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">mass manufacturing companies</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN>have to<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">process<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">various<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">document</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">s</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">.</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"> </SPAN></SPAN><A class="Hyperlink BCX0 SCXW247230911" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">Azure Form Recognizer</SPAN></SPAN></A><SPAN class="TextRun Underlined BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">targets this scenario<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">by</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"> </SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">extracting</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">information from forms and images into structured data</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN>to automate data entry.</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN>Form Recognizer builds on top of Cognitive Services’&nbsp;<SPAN><A title="What is Optical character recognition Vision Service?" 
href="#" target="_blank" rel="noopener">Optical Character Recognition</A></SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">&nbsp;(OCR)<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">capability<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">to recognize text,&nbsp;<SPAN><A title="Azure Text Analytics API" href="#" target="_blank" rel="noopener">Text Analytics</A></SPAN> and Custom Text to relate key value pairs, like a name field description to the value of the name on an ID.</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">F</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">orm Recognizer<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">includes</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">additional<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">task specific model</SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">s</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN>&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">to identify information like<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">Worldwide Passports and U.S Driver's Licenses</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"> </SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911">reliably and<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">is<SPAN>&nbsp;</SPAN></SPAN><SPAN class="NormalTextRun BCX0 SCXW247230911">in compliance with</SPAN></SPAN><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"> <SPAN><A title="Microsoft Responsible AI Principles" href="#" target="_blank" rel="noopener">Microsoft's Responsible AI Principles.</A></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P class="lia-align-left">&nbsp;</P> <P class="lia-align-left">&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Slide1.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283442iACAA956FA2A8DB30/image-size/large?v=v2&amp;px=999" role="button" title="Slide1.jpg" alt="Azure Form Recognizer Business Logic &amp; AI Models" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Form Recognizer Business Logic &amp; AI Models</span></span></P> <P>&nbsp;</P> <P class="lia-align-left">&nbsp;</P> <P class="lia-align-left"><SPAN class="TextRun SCXW45310458 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW45310458 BCX0"><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">While the service is highly specialized, various stakeholders ranging from developers new to AI to data scientists can use Azure Form Recognizer. 
A developer can build complex document processing functionality with the minimum effort using</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN></SPAN><STRONG><SPAN class="TextRun MacChromeBold SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">SDKs</SPAN></SPAN></STRONG><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0">and</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN></SPAN><STRONG><SPAN class="TextRun MacChromeBold SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">REST APIs</SPAN></SPAN></STRONG><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">. A domain expert can then use the very same service to further</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW173097167 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="FieldRange SCXW173097167 BCX0"><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">train a custom model</SPAN></SPAN></SPAN></A><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0">for industry-specific forms with complex structures. A Machine Learning expert can bring their own</SPAN><STRONG><SPAN class="NormalTextRun SCXW173097167 BCX0"> </SPAN></STRONG></SPAN><STRONG><SPAN class="TrackedChange SCXW173097167 BCX0"><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">custom Machine Learning models</SPAN></SPAN></SPAN></STRONG><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0">, optimized for their use case to extend Form Recognizer.&nbsp;</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0">Th</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0">e same development options are available</SPAN><SPAN class="NormalTextRun SCXW173097167 BCX0">&nbsp;across Applied AI Services.</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <H2 class="lia-align-left">&nbsp;</H2> <H2 class="lia-align-left"><SPAN class="TextRun SCXW45310458 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW45310458 BCX0"><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"><SPAN class="TextRun MacChromeBold SCXW154724764 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW154724764 BCX0">What are the latest updates for Applied AI Services?</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></H2> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW45310458 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW45310458 BCX0"><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"><SPAN class="TextRun MacChromeBold SCXW154724764 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW154724764 BCX0"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">To</SPAN><SPAN class="NormalTextRun SCXW3074855 
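<P>As a concrete illustration of the SDK route, the sketch below uses the Form Recognizer client library for Python (azure-ai-formrecognizer) to run a trained custom model against a document URL. The endpoint, key, model ID, and form URL are placeholders, and method names may vary between library versions, so treat this as a sketch rather than a definitive implementation.</P> <LI-CODE lang="python"># Illustrative sketch: extract key-value pairs from a form with a trained custom
# model using the azure-ai-formrecognizer client library. Endpoint, key, model ID,
# and the form URL are placeholders; adjust them to your own resource and model.
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient(
    endpoint="https://&lt;your-resource&gt;.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("&lt;form-recognizer-key&gt;"),
)

poller = client.begin_recognize_custom_forms_from_url(
    model_id="&lt;custom-model-id&gt;",
    form_url="https://&lt;storage&gt;/forms/sample-form.jpg",
)

for form in poller.result():
    print(f"Form type: {form.form_type}")
    for name, field in form.fields.items():
        # Each recognized field carries a value and a confidence score.
        print(f"{name}: {field.value} (confidence {field.confidence})")</LI-CODE>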
BCX0">day we&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">are&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">introducing</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;</SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" title="Azure Applied AI - Video Analyzer" href="#" target="_blank" rel="noopener noreferrer"><SPAN class="TextRun Underlined MacChromeBold SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0" data-ccp-charstyle="Hyperlink">Azure Video Analyzer</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">, bringing Live Video Analyzer</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;and</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> <A title="Azure Video Indexer Service" href="#" target="_blank" rel="noopener">Video Indexer</A>&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">closer together</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">.</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;</SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" title="Azure Applied AI - Video Analyzer Blog post" href="https://gorovian.000webhostapp.com/?exam=t5/azure-video-analyzer/introducing-azure-video-analyzer-preview/ba-p/2382018?WT.mc_id=aiml-27332-ayyonet" target="_blank" rel="noopener noreferrer"><SPAN class="TextRun Underlined SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0" data-ccp-charstyle="Hyperlink">Azure Video Analyzer</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;(</SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW3074855 BCX0">formerly</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;Live Video Analytics, in preview) delivers the developer platform for video analytics, and&nbsp;</SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" title="Azure Applied AI - Video Analyzer for Media" href="https://gorovian.000webhostapp.com/?exam=t5/azure-media-services/new-name-and-a-wealth-of-new-capabilities-in-video-indexer-now/ba-p/2305908?WT.mc_id=aiml-27332-ayyonet" target="_blank" rel="noopener noreferrer"><SPAN class="TextRun Underlined SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0" data-ccp-charstyle="Hyperlink">Azure Video Analyzer for Media</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;(</SPAN><SPAN class="NormalTextRun ContextualSpellingAndGrammarErrorV2 SCXW3074855 BCX0">formerly</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;Video Indexer, generally available) delivers AI solutions targeted at Media &amp; Entertainment scenarios.&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">With Azure Video Analyzer, you can process live</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">video at the&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">e</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">dge for high latency, record video</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">in the cloud</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun 
SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">or record relevant video clips</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">on the edge</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">for limited bandwidth deployments. You can analyze video with AI of your choice by leveraging&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">services</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;such as Cognitive Services</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">Custom</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">Vision</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">and</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">S</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">patial</SPAN></SPAN></A><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">A</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">nalysis</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">,</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">open</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;s</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">ource</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">models,</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0"> partner offers such as&nbsp;<A href="#" target="_blank" rel="noopener noreferrer" data-auth="NotApplicable" data-linkindex="13">Intel OpenVINO</A>&nbsp;and </SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">models</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"> </SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">or just your own custom-built models.</SPAN></SPAN><SPAN class="TextRun Highlight Underlined SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0" data-ccp-charstyle="normaltextrun" 
data-ccp-charstyle-defn="{&quot;ObjectId&quot;:&quot;9dd3fff8-a7cc-475e-b3ec-140b179812ec|146&quot;,&quot;ClassId&quot;:1073872969,&quot;Properties&quot;:[134233614,&quot;true&quot;,201340122,&quot;1&quot;,469775450,&quot;normaltextrun&quot;,469778129,&quot;normaltextrun&quot;,469778324,&quot;Default Paragraph Font&quot;]}">&nbsp;</SPAN></SPAN><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">With&nbsp;</SPAN></SPAN><A class="Hyperlink SCXW3074855 BCX0" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0" data-ccp-charstyle="Hyperlink">Azure Video Analyzer for Media,</SPAN></SPAN></A><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;your video</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;and audio</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">files&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">are easily processed&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">through a&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">rich set of out-of-the-box</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;machine</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;learning models all&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">pre-</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">integrated</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;together, in different channels of the&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">content;&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">with&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">V</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">ision to detect people and scenes, Language to</SPAN><SPAN class="NormalTextRun CommentStart SCXW3074855 BCX0">&nbsp;get insights and segment timestamps, Speech to provide close captioning</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;so&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">you</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;can quickly extract insights from&nbsp;</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">your</SPAN><SPAN class="NormalTextRun SCXW3074855 BCX0">&nbsp;libraries.</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW45310458 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW45310458 BCX0"><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"><SPAN class="TextRun MacChromeBold SCXW154724764 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW154724764 BCX0"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Azure Video Analyer Taxonomy.png" style="width: 480px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283443i33159980B2466512/image-size/large?v=v2&amp;px=999" role="button" title="Azure Video Analyer Taxonomy.png" alt="Azure Video AI Solutions" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Video AI 
Solutions</span></span></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><SPAN class="TextRun SCXW45310458 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW45310458 BCX0"><SPAN class="TextRun BCX0 SCXW247230911" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW247230911"><SPAN><SPAN class="TextRun SCXW173097167 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW173097167 BCX0"><SPAN class="TextRun MacChromeBold SCXW154724764 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW154724764 BCX0"><SPAN class="TextRun SCXW3074855 BCX0" data-contrast="none"><SPAN class="NormalTextRun SCXW3074855 BCX0"><SPAN class="TrackedChange BCX0 SCXW100317252"><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">Organizations across the globe are already using Azure Video Analyzer to optimize</SPAN></SPAN></SPAN><SPAN class="TrackedChange BCX0 SCXW100317252"><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;various&nbsp;</SPAN></SPAN></SPAN><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">processes</SPAN><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;such</SPAN><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;as&nbsp;</SPAN></SPAN><A class="Hyperlink BCX0 SCXW100317252" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252" data-ccp-charstyle="Hyperlink">Lufthansa</SPAN></SPAN></A><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;to improve flight turnaround times and&nbsp;</SPAN></SPAN><A class="Hyperlink BCX0 SCXW100317252" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252" data-ccp-charstyle="Hyperlink">Dow</SPAN></SPAN></A><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;to enhance workplace safety with leak detection. Media and Entertainment organizations such as&nbsp;</SPAN></SPAN><A class="Hyperlink BCX0 SCXW100317252" href="#" target="_blank" rel="noreferrer noopener"><SPAN class="TextRun Underlined BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252" data-ccp-charstyle="Hyperlink">MediaValet</SPAN></SPAN></A><SPAN class="TextRun BCX0 SCXW100317252" data-contrast="none"><SPAN class="NormalTextRun BCX0 SCXW100317252">&nbsp;use Azure Video Analyzer to extract more value out of their content by finding what they need quickly, scaling across millions of assets.</SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></SPAN></P> <P>&nbsp;</P> <P><A title="Azure&nbsp;Applied AI - Metrics Advisor" href="#" target="_blank" rel="noopener"><STRONG><SPAN data-contrast="none">Azure&nbsp;Metrics Advisor</SPAN></STRONG></A><SPAN data-contrast="none">, generally available today, builds on top of Anomaly Detector and makes integration of data, root cause diagnosis,&nbsp;and&nbsp;customizing alerts fast and easy through a readymade visualization and customization UI.&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Samsung</SPAN></A><SPAN data-contrast="none">&nbsp;uses&nbsp;Azure Metrics Advisor. 
In the past, Samsung had relied on a rule-based system in monitoring the health of their Smart TV service. But rules can't cover all the scenarios&nbsp;and&nbsp;the</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">previous&nbsp;approach&nbsp;generated lots of noise</SPAN><SPAN data-contrast="none">.&nbsp;</SPAN><SPAN data-contrast="none">Using&nbsp;Azure&nbsp;Metrics Advisor, Samsung&nbsp;created</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">a new monitoring solution&nbsp;in China to enhance their previous system&nbsp;and&nbsp;to</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">provide</SPAN><SPAN data-contrast="none">&nbsp;</SPAN><SPAN data-contrast="none">automated, granular root cause analysis&nbsp;that&nbsp;helps&nbsp;engineers locate issues and shorten the time to resolution.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:280,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">With the latest enhancement to&nbsp;</SPAN><A title="Azure Applied AI - Bot Services" href="#" target="_blank" rel="noopener"><SPAN>Azure Bot Services</SPAN></A><SPAN data-contrast="none">, it is&nbsp;easier to build, test, and publish text, speech, or telephony-based bots through an integrated development experience.&nbsp;Varun Nagalia, Director of&nbsp;Unilever&nbsp;HR Systems and Employee Experience Apps:&nbsp;“Una was built on the Microsoft Bot framework foundation. We used the Virtual Assistant templates to streamline the bot’s business logic and handling of user intent in an efficient way. We were able to leverage these templates, adapting them to our business needs and turn around nearly 40 global features in less than 12 months.”</SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bot Service screenshot.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283445i388F4779153506D9/image-size/large?v=v2&amp;px=999" role="button" title="Bot Service screenshot.png" alt="Azure Bot Service" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Bot Service</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN data-contrast="none">Solving<STRONG> automation</STRONG>,<STRONG> accessibility</STRONG> and <STRONG>scalability</STRONG> problems with Applied AI Services is faster than ever. 
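<P>For a sense of the underlying API, the illustrative sketch below calls the Anomaly Detector REST endpoint that Metrics Advisor builds on, asking whether the latest point of a small daily series is anomalous. The endpoint and key are placeholders and the series is synthetic (the API expects at least 12 points); Metrics Advisor itself layers data-feed onboarding, root cause diagnosis, and alerting on top of this kind of detection.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">// Illustrative only: ask the Anomaly Detector API (which Metrics Advisor
// builds on) whether the latest point of a daily series is an anomaly.
// Endpoint and key are placeholders; the series here is synthetic.
using System;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class DetectLatestPointSketch
{
    static async Task Main()
    {
        var endpoint = "https://YourResourceName.cognitiveservices.azure.com";
        var key = "YourAnomalyDetectorKey";

        // Build a simple daily series with a suspicious spike at the end.
        var start = new DateTime(2021, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        var series = Enumerable.Range(0, 13)
            .Select(i =&gt; new { timestamp = start.AddDays(i), value = (i == 12) ? 900.0 : 100.0 + i })
            .ToArray();
        var body = JsonSerializer.Serialize(new { series, granularity = "daily" });

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        var response = await client.PostAsync(
            $"{endpoint}/anomalydetector/v1.0/timeseries/last/detect",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The JSON response includes fields such as "isAnomaly" for the last point.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}</LI-CODE> <P>&nbsp;</P>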
<P>With the latest enhancement to <A title="Azure Applied AI - Bot Services" href="#" target="_blank" rel="noopener">Azure Bot Services</A>, it is easier to build, test, and publish text, speech, or telephony-based bots through an integrated development experience. Varun Nagalia, Director of Unilever HR Systems and Employee Experience Apps: “Una was built on the Microsoft Bot framework foundation. We used the Virtual Assistant templates to streamline the bot’s business logic and handling of user intent in an efficient way. We were able to leverage these templates, adapting them to our business needs and turn around nearly 40 global features in less than 12 months.”</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bot Service screenshot.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/283445i388F4779153506D9/image-size/large?v=v2&amp;px=999" role="button" title="Bot Service screenshot.png" alt="Azure Bot Service" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure Bot Service</span></span></P> <P>&nbsp;</P> <P>Solving <STRONG>automation</STRONG>, <STRONG>accessibility</STRONG>, and <STRONG>scalability</STRONG> problems with Applied AI Services is faster than ever. When a task is not solved with Applied AI Services, Cognitive Services are all-purpose APIs that are available to developers to build their solutions. Check out the Azure AI Build session to learn more and see the demos in action.</P> <P>&nbsp;</P> <P>Read these blog posts for a deeper dive into the latest from these Azure Applied AI Services:</P> <P>&nbsp;</P> <UL> <LI><STRONG>Join us at Microsoft Build 2021</STRONG>:&nbsp;<A title="Register for Microsoft Build 2021 AI Sessions" href="#" target="_blank" rel="noopener">aka.ms/Register4Build2021</A></LI> <LI>Azure Bot Service Blog post:&nbsp;<A title="Microsoft Build 2021 - Conversational AI update" href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-2021-conversational-ai-update/ba-p/2375203?WT.mc_id=aiml-27332-ayyonet" target="_blank" rel="noopener">aka.ms/AzureBotServiceBuild21</A></LI> <LI>Azure Video Analyzer:&nbsp;<A title="Azure Video Analyzer Blog post" href="https://gorovian.000webhostapp.com/?exam=t5/azure-media-services/new-name-and-a-wealth-of-new-capabilities-in-video-indexer-now/ba-p/2305908?WT.mc_id=aiml-27332-ayyonet" target="_blank" rel="noopener">aka.ms/BuildBlogAVAM</A></LI> <LI><A href="#" target="_blank" rel="noopener">See what others in the AI community are saying about Applied AI Services.</A></LI> <LI><STRONG>New to Azure AI?</STRONG>&nbsp;Check out these resources: <A title="Azure AI/ML Resources" href="#" target="_blank" rel="noopener">aka.ms/DevRel/CognitiveServices</A></LI> <LI><STRONG>Try Azure AI for Free</STRONG>:&nbsp;<A title="Try out Azure Applied AI Services for free Azure credit" href="#" target="_blank" rel="noopener">aka.ms/StartFreeAzureAI</A></LI> </UL> <P>&nbsp;</P> Tue, 22 Jun 2021 01:45:53 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/accelerating-the-time-to-value-with-azure-applied-ai-services/ba-p/2377309 Yonet 2021-06-22T01:45:53Z Conversational AI at Build 2021 https://gorovian.000webhostapp.com/?exam=t5/azure-ai/conversational-ai-at-build-2021/ba-p/2381033 <P>We're looking forward to the kickoff of Microsoft Build 2021, which starts tomorrow, and the chance to talk more about the investments we have been making across Conversational AI at Microsoft. We will have a detailed article here on Microsoft Tech Community on Day 1 of Build, talking about how you can get started with building bots faster than ever before and collaborate with anyone across your organization. Below is a summary of our other content at Microsoft Build 2021.</P> <P>&nbsp;</P> <P><A class="session-block__link dark-theme" href="#" target="_blank" rel="noopener" aria-label="Build intelligent applications infused with world-class AI"><STRONG>Build intelligent applications infused with world-class AI</STRONG></A></P> <DIV class="session-details-header-banner__date"><STRONG>Tuesday, May 25 | 2:30pm - 3:00pm PST</STRONG></DIV> <DIV class="session-details-header-banner__date"><STRONG>Wednesday, May 26 | 6:30am - 7:00am PST</STRONG><BR /><SPAN>Create breakthrough experiences in your applications with industry-leading AI. Join Eric Boyd, Corporate Vice President, Azure AI, as he demonstrates the latest innovations. Learn how Azure is simplifying the developer experience and enabling you to harness the power of AI in your mission-critical applications.</SPAN></DIV> <P>&nbsp;</P> <P><A class="session-block__link dark-theme" href="#" target="_blank" rel="noopener" aria-label="Ask the Experts: Build intelligent applications infused with world-class AI"><STRONG>Ask the Experts: Build intelligent applications infused with world-class AI</STRONG></A></P> <DIV class="session-details-header-banner__date"><STRONG>Tuesday, May 25 | 3:30pm - 4:00pm PST</STRONG></DIV> <DIV class="session-details-header-banner__date"><SPAN>Learn how Azure enables you to create breakthrough experiences in your applications with industry-leading AI.
Join the Azure AI product team to learn more about the latest innovations, and get your questions answered during this live Q&amp;A.</SPAN></DIV> <DIV class="session-details-header-banner__date">&nbsp;</DIV> <P><A href="#" target="_blank" rel="noopener"><STRONG>Extend your Power Virtual Agents with Bot Framework Composer</STRONG></A><BR /><STRONG>On Demand</STRONG><BR /><SPAN>Enhance your Power Virtual Agents bot by developing custom dialogs with Bot Framework Composer and publishing them directly to your Power Virtual Agents bot. In this session, we will use Composer to extend your Power Virtual Agents bot with Bot Framework functionality, including: 1. Adaptive dialogs 2. Language Generation (LG) 3. Adaptive Cards<BR /><BR /><STRONG><A href="#" target="_blank" rel="noopener">Azure AI Roundtable: Come discuss what's new and our future vision</A></STRONG><BR /></SPAN></P> <DIV class=""><STRONG>Wednesday, May 26 | 3:00pm - 4:00pm PST</STRONG></DIV> <DIV class="desc detail-item"> <P class="">Create breakthrough experiences in your applications with Azure AI. From improvements in AI quality, to cross-lingual custom neural voices and document translation, we have introduced many new capabilities. In this roundtable, we'll discuss the latest innovations in Conversational AI, Vision, Speech, Language, and Decision, as well as our future vision for Azure AI. We look forward to hearing your feedback and your questions.</P> </DIV> <P>&nbsp;</P> <P><SPAN><STRONG>Sign up for 1:1 Consultation with the Conversational AI team</STRONG><BR />As part of the Microsoft Build Conference (May 25-27) this week, we are offering free 1:1 customer consultation sessions with experts in our Microsoft Conversational AI product engineering team. After you register for the conference, you can sign up for these consultations at <A href="#" target="_blank" rel="noopener">https://mybuild.microsoft.com/app-consult</A>, choose Azure Conversational AI, and let us know more about your scenarios and challenges. We look forward to talking with you.</SPAN></P> Mon, 24 May 2021 15:54:46 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/conversational-ai-at-build-2021/ba-p/2381033 GaryPrettyMsft 2021-05-24T15:54:46Z Microsoft Azure AI and Data Fundamentals Specializations are now live on Coursera https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-azure-ai-and-data-fundamentals-specializations-are-now/ba-p/2376509 <P><SPAN data-contrast="auto"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="MS COursera.PNG" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/282705i7229094E9CF0FF04/image-size/large?v=v2&amp;px=999" role="button" title="MS COursera.PNG" alt="MS COursera.PNG" /></span></SPAN></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">In April, we </SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">announced </SPAN></A><SPAN data-contrast="auto">our first ever</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none"> Azure Fundamentals</SPAN></A><SPAN data-contrast="auto">&nbsp;specialization on Coursera’s platform. 
Today, we are excited that&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">AI Fundamentals</A> and <A href="#" target="_blank" rel="noopener">Data Fundamentals</A> are now live on Coursera and a part of the <A href="#" target="_blank" rel="noopener">Microsoft Azure Learning Collection</A>. These new Specializations come amid rapid cloud adoption and offer students a strong set of skills that are in high demand in today’s workforce. In fact, according to <A href="#" target="_blank" rel="noopener">Burning Glass</A>, the number of Azure-related jobs is projected to grow 38 percent over the next ten years.</P> <P>&nbsp;</P> <P><STRONG>What skilling content is included in the new Microsoft Azure AI and Data Specializations?</STRONG></P> <P>&nbsp;</P> <P><STRONG>Microsoft AI Fundamentals</STRONG>: In this Specialization, students can gain foundational knowledge about core artificial intelligence (AI) concepts and become familiar with services in Microsoft Azure that can be used to create AI solutions. Upon completing the Specialization, students will be prepared for the <A href="#" target="_blank" rel="noopener">Microsoft Certified AI-900 Azure AI Fundamentals Exam</A>. Here is a preview of the courses offered in the AI Fundamentals Specialization:</P> <OL> <LI><I>Artificial Intelligence:</I> learn the key AI concepts of machine learning, anomaly detection, computer vision, natural language processing, and conversational AI.</LI> <LI><I>Machine Learning:</I> learn how to use Azure Machine Learning to create and publish models without writing code.</LI> <LI><I>Computer Vision:</I> learn how to use the Computer Vision service to analyze images.</LI> <LI><I>Natural Language Processing:</I> learn how to use the Text Analytics service for advanced natural language processing of raw text for sentiment analysis, key phrase extraction, named entity recognition, and language detection.</LI> <LI><I>Preparing for the AI-900 Microsoft Azure AI Fundamentals Exam:</I> test your knowledge in a series of practice exams mapped to all the main topics covered in the AI-900 exam, ensuring you are well prepared for certification success.</LI> </OL> <P>&nbsp;</P> <P><STRONG>Microsoft Data Fundamentals</STRONG>: In this Specialization, students can learn directly from Microsoft Azure experts about the core database concepts in a cloud environment. Students will build foundational knowledge of cloud data services within Microsoft Azure, and prepare for the <A href="#" target="_blank" rel="noopener">Microsoft Data Fundamentals DP-900 exam</A>. Here is a preview of the courses offered in the Data Fundamentals Specialization:</P> <OL> <LI><I>Explore Core Data Concepts:</I> learn the fundamentals of database concepts in a cloud environment, get basic skilling in cloud data services, and build your foundational knowledge of cloud data services.</LI> <LI><I>Azure SQL:</I> learn about SQL and see how it is used to query and maintain data in a database, and the different dialects that are available.</LI> <LI><I>Azure Cosmos DB:</I> explore non-relational data offerings, provisioning and deploying non-relational databases, and non-relational data stores with Microsoft Azure.</LI> <LI><I>Modern Data Warehouse Analytics:</I> learn about the processing options available for building data analytics solutions in Azure and explore Synapse Analytics, Databricks, and HDInsight.</LI> <LI><I>Preparing for the DP-900 Microsoft Azure Data Fundamentals Exam:</I> test your knowledge in a series of practice exams mapped to all the main topics covered in the DP-900 exam, ensuring you are well prepared for certification success.</LI> </OL> <P>&nbsp;</P> <P><STRONG>Scholarship opportunities for Women in Cloud</STRONG></P> <P>&nbsp;</P> <P>As a part of this initiative, we are <A href="#" target="_blank" rel="noopener">teaming up with Women in Cloud</A> to offer their members over 600 scholarships to all three specializations and the respective certification exam vouchers at no cost.</P> <P>&nbsp;</P> <P><I>“Digital skilling is becoming a critical necessity in today’s world and women are struggling to get access to career building education. That is why we are excited to partner with Microsoft and Coursera on this special initiative,” said </I><A href="#" target="_blank" rel="noopener"><I>Women in Cloud President Chaitra Vedullapalli</I></A>.</P> <P>&nbsp;</P> <P>Applicants who are accepted into this program will enroll in and complete the three Azure digital skilling Specializations via the Coursera platform: Azure Fundamentals, AI Fundamentals, and Data Fundamentals. What’s more, scholars who complete these trainings will also receive a certification voucher for each Specialization to take the respective exams for free. In addition to free Azure skilling, scholars can also expect career support from the Women in Cloud community. Microsoft, Women in Cloud, and Coursera are committed to equipping women interested in upskilling and contributing to an <A href="#" target="_blank" rel="noopener">inclusive, skills-based economy</A>.</P> <P>&nbsp;</P> <P>Enroll in the <A href="#" target="_blank" rel="noopener">AI Fundamentals</A> and <A href="#" target="_blank" rel="noopener">Data Fundamentals</A> Specializations today!</P> <P>&nbsp;</P> Fri, 21 May 2021 18:11:17 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-azure-ai-and-data-fundamentals-specializations-are-now/ba-p/2376509 brendasaucedo 2021-05-21T18:11:17Z Azure Neural Text-to-Speech extended to support lip sync with viseme https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-neural-text-to-speech-extended-to-support-lip-sync-with/ba-p/2356748 <P><A href="#" target="_blank" rel="noopener">Neural Text-to-Speech</A> (Neural TTS), part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions.
One emerging solution area is to create an immersive virtual experience with an avatar that automatically animates its mouth movements to synchronize with the synthetic speech. Today, we introduce the new feature that allows developers to <SPAN>synchronize&nbsp;</SPAN>the mouth and face poses with TTS – the viseme events.</P> <P>&nbsp;</P> <H1>What is viseme</H1> <P>&nbsp;</P> <P>A viseme is the visual description of a phoneme in a spoken language. It defines the position of the face and the mouth when speaking a word.&nbsp;With the lip sync feature, developers can get the viseme sequence and its duration from generated speech for facial expression synchronization. Viseme can be used to control the movement of 2D and 3D avatar models, perfectly matching mouth movements to synthetic speech.</P> <P>&nbsp;</P> <P>Traditional avatar mouth movement requires manual frame-by-frame production, which requires long production cycles and high human labor costs.</P> <P>&nbsp;</P> <P>Viseme can generate the corresponding facial parameters according to the input text. It greatly expands the number of scenarios by making the avatar easier to use and control. Below are some example scenarios that can be augmented with the lip sync feature.&nbsp;</P> <UL> <LI><STRONG>Customer service agent:</STRONG> Create an animated virtual voice assistant for intelligent kiosks, building the multi-mode integrative services for your customers;</LI> <LI><STRONG>Newscast:</STRONG> Build immersive news broadcasts and make content consumption much easier with natural face and mouth movements;</LI> <LI><STRONG>Entertainment:</STRONG> Build more interactive gaming avatars and cartoon characters that can speak with dynamic content;</LI> <LI><STRONG>Education:</STRONG> Generate more intuitive language teaching videos that help language learners to understand the mouth behavior of each word and phoneme;</LI> <LI><STRONG>Accessibility:</STRONG> Help the hearing-impaired to pick up sounds visually and "lip-read" any speech content.</LI> </UL> <P>&nbsp;</P> <H1>How viseme works with Azure neural TTS</H1> <P>&nbsp;</P> <P>The viseme turns the input text or SSML (Speech Synthesis Markup Language) into Viseme ID and Audio offset which are used to represent the key poses in observed speech, such as the position of the lips, jaw and tongue when producing a particular phoneme. With the help of a 2D or 3D rendering engine, you can use the viseme output to control the animation of your avatar.</P> <P>&nbsp;</P> <P>The overall workflow of viseme is depicted in the flowchart below.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Yueying_Liu_0-1621061769441.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/280823i66762CAF499A9E23/image-size/large?v=v2&amp;px=999" role="button" title="Yueying_Liu_0-1621061769441.jpeg" alt="Yueying_Liu_0-1621061769441.jpeg" /></span></P> <P>The underlying technology for the Speech viseme feature consists of three components: Text Analyzer, TTS Acoustic Predictor, and TTS Viseme Generator.</P> <P>&nbsp;</P> <P>To generate the viseme output for a given text, the text or SSML is first input into the Text Analyzer, which analyzes the text and provides output in the form of phoneme sequence. A phoneme is a basic unit of sound that distinguishes one word from another in a particular language. 
A sequence of phonemes defines the pronunciations of the words provided in the text.</P> <P>&nbsp;</P> <P>Next, the phoneme sequence goes into the TTS Acoustic Predictor and the start time of each phoneme is predicted.</P> <P>&nbsp;</P> <P>Then, the TTS Viseme Generator maps the phoneme sequence to the viseme sequence and marks the start time of each viseme in the output audio. Each viseme is represented by a serial number, and the start time of each viseme is represented by an audio offset. Often several phonemes correspond to a single viseme, as several phonemes look the same on the face when pronounced, such as 's' and 'z'.</P> <P>&nbsp;</P> <P>Here is an example of the viseme output.</P> <P class="lia-indent-padding-left-30px"><FONT color="#333399"><EM>(Viseme), Viseme ID: 1, Audio offset: 200ms.</EM></FONT></P> <P class="lia-indent-padding-left-30px"><FONT color="#333399"><EM>(Viseme), Viseme ID: 5, Audio offset: 850ms.</EM></FONT></P> <P class="lia-indent-padding-left-30px"><FONT color="#333399"><EM>……</EM></FONT></P> <P class="lia-indent-padding-left-30px"><FONT color="#333399"><EM>(Viseme), Viseme ID: 13, Audio offset: 2350ms.</EM></FONT></P> <P>&nbsp;</P> <P>This feature is built into the Speech SDK. With just a few lines of code, you can easily enable facial and mouth animation using the viseme events together with your TTS output.</P> <P>&nbsp;</P> <H1>How to use the viseme</H1> <P>&nbsp;</P> <P>To enable viseme, you need to subscribe to the VisemeReceived event in the <A href="#" target="_blank" rel="noopener">Speech SDK</A> (the TTS REST API doesn't support viseme). The following snippet illustrates how to subscribe to the viseme event in C#. Viseme only supports English (United States) neural voices at the moment but will be extended to support more languages later.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">using (var synthesizer = new SpeechSynthesizer(speechConfig, audioConfig))
{
    // Subscribes to the viseme received event
    synthesizer.VisemeReceived += (s, e) =&gt;
    {
        Console.WriteLine($"Viseme event received. Audio offset: " +
            $"{e.AudioOffset / 10000}ms, viseme id: {e.VisemeId}.");
    };

    var result = await synthesizer.SpeakSsmlAsync(ssml);
}</LI-CODE> <P>&nbsp;</P> <P>After obtaining the viseme output, you can use these outputs to drive character animation. You can build your own characters and automatically animate them.</P> <P>&nbsp;</P> <P>For 2D characters, you can design a character that suits your scenario and use Scalable Vector Graphics (SVG) for each viseme ID to get a time-based face position. With the temporal tags provided by the viseme events, these well-designed SVGs will be processed with smoothing modifications and provide robust animation to the users. For example, the illustration below shows a red lip character designed for language learning. Try the <A href="#" target="_blank" rel="noopener">red lip animation experience in Bing Translator</A>, and learn more about how visemes are used to demonstrate the correct pronunciations for words.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Yueying_Liu_1-1621061769468.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/280822i9A4558785F7D8D18/image-size/large?v=v2&amp;px=999" role="button" title="Yueying_Liu_1-1621061769468.png" alt="Yueying_Liu_1-1621061769468.png" /></span></P> <P>&nbsp;</P>
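<P>As a rough, non-production sketch of the 2D approach, the C# snippet below (mirroring the subscription snippet above) buffers the viseme events for one utterance and then maps each viseme ID to a hypothetical SVG frame name, replaying the frames against the audio offsets. The file-naming scheme and the Render helper are assumptions for illustration only; wiring the frames to a real renderer and to actual audio playback is up to your application.</P> <P>&nbsp;</P> <LI-CODE lang="csharp">// Illustrative sketch: collect viseme events, then replay them as 2D frame
// swaps keyed by audio offset. "viseme-{id}.svg" and Render() are hypothetical.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class VisemeAnimationSketch
{
    record VisemeFrame(uint VisemeId, TimeSpan Offset);

    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        var frames = new List&lt;VisemeFrame&gt;();

        // Passing a null AudioConfig synthesizes without playing the audio,
        // so this sketch only exercises the viseme timeline.
        using var synthesizer = new SpeechSynthesizer(config, null as AudioConfig);
        synthesizer.VisemeReceived += (s, e) =&gt;
        {
            // AudioOffset is in 100-nanosecond ticks, as in the snippet above.
            frames.Add(new VisemeFrame(e.VisemeId, TimeSpan.FromTicks((long)e.AudioOffset)));
        };

        await synthesizer.SpeakTextAsync("Hello, this is a lip sync test.");

        // Dry-run the animation: show each frame at its offset on the timeline.
        var clock = TimeSpan.Zero;
        foreach (var frame in frames)
        {
            await Task.Delay(frame.Offset - clock);
            clock = frame.Offset;
            Render($"viseme-{frame.VisemeId}.svg");
        }
    }

    // Hypothetical helper: in a real app this would swap the mouth image.
    static void Render(string svgPath) =&gt; Console.WriteLine($"Show frame: {svgPath}");
}</LI-CODE> <P>&nbsp;</P>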
<P>For 3D characters, think of the characters as string puppets. The puppet master pulls the strings from one state to another and the laws of physics will do the rest and drive the puppet to move fluidly. The viseme output acts as a puppet master to provide an action timeline. The animation engine defines the physical laws of action. By interpolating frames with easing algorithms, the engine can further generate high-quality animations.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Yueying_Liu_2-1621061769510.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/280824iBF54497AF5E1ADA3/image-size/large?v=v2&amp;px=999" role="button" title="Yueying_Liu_2-1621061769510.png" alt="Yueying_Liu_2-1621061769510.png" /></span></P> <P><FONT size="2">(Note: the character image in this example is from Mixamo.)</FONT></P> <P>&nbsp;</P> <P data-unlink="true">Learn more about how to use the viseme feature to enable text-to-speech animation with the tutorial video below.</P> <P><LI-VIDEO vid="https://www.youtube.com/watch?v=ui9XT47uwxs" align="center" size="medium" width="400" height="225" uploading="false" thumbnail="https://i.ytimg.com/vi/ui9XT47uwxs/hqdefault.jpg" external="url"></LI-VIDEO></P> <P>&nbsp;</P> <H1>Get started</H1> <P>&nbsp;</P> <P>With the viseme feature, Azure neural TTS expands its support for more scenarios and enables developers to create an immersive virtual experience with automatic lip sync to synthetic speech.</P> <P>&nbsp;</P> <P>Let us know how you are using or plan to use Neural TTS voices in this <A href="#" target="_blank" rel="noopener noreferrer">form</A>. If you prefer, you can also contact us at mstts [at] microsoft.com.
We look forward to hearing about your experience and developing more compelling services together with you for the developers around the world.</P> <P>&nbsp;</P> <P><SPAN><A href="#" target="_blank" rel="noopener">See our documentation for Viseme</A></SPAN></P> <P><A href="#" target="_blank" rel="noopener">Add voice to your app in 15 minutes</A></P> <P><A href="#" target="_blank" rel="noopener">Build a voice-enabled bot</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy Azure TTS voices on prem with-Speech Containers</A></P> <P><A href="#" target="_blank" rel="noopener">Build your custom voice</A></P> <P><A href="#" target="_blank" rel="noopener">Improve synthesis with the Audio Content Creation tool</A></P> <P><A href="#" target="_self">Visit our Speech page to explore more speech scenarios</A></P> <P>&nbsp;</P> Thu, 20 May 2021 06:29:47 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-neural-text-to-speech-extended-to-support-lip-sync-with/ba-p/2356748 Yueying_Liu 2021-05-20T06:29:47Z Build an application that transcribes speech https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-an-application-that-transcribes-speech/ba-p/2322484 <P>One of the most common ways to benefit from AI services in your apps is to utilize Speech to Text capabilities to tackle a range of scenarios, from providing captions for audio/video to transcribing phone conversations and meetings. Speech service, an Azure Cognitive Service, offers speech transcription via its Speech to Text API in over <A href="#" target="_blank" rel="noopener">94 language/locales</A> and growing.</P> <P>In this article we are going to show you how to integrate real-time speech transcription into a mobile app for a simple note taking scenario. Users will be able to record notes and have the transcript show up as they speak. Our <A href="#" target="_blank" rel="noopener">Speech SDK</A> supports a variety of operating systems and programming languages.&nbsp;Here we are going to write this application in Java to run on Android.</P> <P>&nbsp;</P> <H2>Common Speech To Text scenarios</H2> <P>The Azure Speech Service provides accurate Speech to Text capabilities that can be used for a wide range of scenarios. Here are some common examples:</P> <UL> <LI><STRONG>Audio/Video captioning</STRONG>. Create captions for audio and video content using either <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539" target="_self">batch transcription</A>&nbsp;or realtime transcription.</LI> <LI><STRONG>Call Center Transcription and Analytics.&nbsp;</STRONG><A href="#" target="_self">Gain insights</A> from the interactions call center agents have with your customers by transcribing these calls and extracting insights from sentiment analysis, keyword extraction and more.</LI> <LI><STRONG>Voice Assistants</STRONG>.&nbsp;<SPAN>Voice assistants using the Speech service empowers developers to create natural, human-like conversational interfaces for their applications and experiences.&nbsp;You can add voice in and voice out capabilities to your flexible and versatile bot built using Azure Bot Service with the&nbsp;<A href="#" target="_blank" rel="noopener" data-linktype="relative-path">Direct Line Speech</A>&nbsp;channel, or leverage the simplicity of authoring a&nbsp;<A href="#" target="_blank" rel="noopener" data-linktype="relative-path">Custom Commands</A>&nbsp;app for straightforward voice commanding scenarios.</SPAN></LI> <LI><STRONG>Meeting Transcription</STRONG>. 
Microsoft Teams provides <A href="https://gorovian.000webhostapp.com/?exam=t5/microsoft-teams-blog/live-transcription-with-speaker-attribution-now-available-in/ba-p/2228817" target="_self">live meeting transcription with speaker attribution</A> that makes meetings more accessible and easier to follow. This capability is powered by the Azure Speech Service.</LI> <LI><STRONG>Dictation</STRONG>. Microsoft Word provides the ability to <A href="#" target="_self">dictate your documents</A> powered by the Azure Speech Service. It's a quick and easy way to get your thoughts out, create drafts or outlines, and capture notes.</LI> </UL> <P>&nbsp;</P> <H2>How to build real-time speech transcription into your mobile app</H2> <H3>Prerequisites</H3> <P>As a basis for our sample app we are going to use the “Recognize speech from a microphone in Java on Android” GitHub sample that can be found <A href="#" target="_blank" rel="noopener">here</A>. After cloning the <A href="#" target="_blank" rel="noopener">cognitive-services-speech-sdk</A> GitHub repo we can use <A href="#" target="_blank" rel="noopener">Android Studio</A> version 3.1 or higher to open the project under <EM>samples/java/android/sdkdemo</EM>. This repo also contains similar samples for various other operating systems and programming languages.</P> <P>In order to use the Azure Speech Service you will have to create a Speech service resource in Azure as described <A href="#" target="_blank" rel="noopener">here</A>. This will provide you with the subscription key for your resource in your chosen service region that you need to use in the sample app.</P> <P>The only thing you need to try out speech recognition with the sample app is to update the configuration for speech recognition by filling in your subscription key and service region at the top of the MainActivity.java source file:</P> <P>&nbsp;</P> <LI-CODE lang="java">//
// Configuration for speech recognition
//

// Replace below with your own subscription key
private static final String SpeechSubscriptionKey = "YourSubscriptionKey";

// Replace below with your own service region (e.g., "westus").
private static final String SpeechRegion = "YourServiceRegion";</LI-CODE> <P>&nbsp;</P> <P>You can leave the configuration for intent recognition as-is since we are just interested in the speech to text functionality here.</P> <P>After you have updated the configuration, you can build and run your sample. Ideally you run the application on an Android phone since you will need to have a microphone input.</P> <P>&nbsp;</P> <H3>Trying out the sample</H3> <P>On first use, the application will ask you for the needed application permissions. Then the sample application provides a few options for you to use. Since we want users to be able to capture a longer note we will use the <STRONG>Recognize continuously</STRONG> option.</P> <P>With this option the recognized text will show up at the bottom of the screen as you speak, and you can speak for a while with some longer pauses in between. Recognition will stop when you hit the stop button.
So, this will allow you to capture a longer note.</P> <P>This is what you should see when you try out the application:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="STT-Sample.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277914i12F3DF4AD5E6482D/image-size/large?v=v2&amp;px=999" role="button" title="STT-Sample.jpg" alt="STT-Sample.jpg" /></span></P> <H3>Code Walkthrough</H3> <P>Now that you have this sample working and you have tried it out, let's look at the key portions of the code that are needed to get the transcript. These can all be found in the MainActivity.java source file.</P> <P>First, in the <EM>onCreate</EM> function we need to ask for permission to access the microphone, internet, and storage:</P> <P>&nbsp;</P> <LI-CODE lang="java">int permissionRequestId = 5;

// Request permissions needed for speech recognition
ActivityCompat.requestPermissions(MainActivity.this,
        new String[]{RECORD_AUDIO, INTERNET, READ_EXTERNAL_STORAGE},
        permissionRequestId);</LI-CODE> <P>&nbsp;</P> <P>Next, we need to create a <EM>SpeechConfig</EM> that provides the subscription key and region so we can access the speech service:</P> <P>&nbsp;</P> <LI-CODE lang="java">// create config
final SpeechConfig speechConfig;
try {
    speechConfig = SpeechConfig.fromSubscription(SpeechSubscriptionKey, SpeechRegion);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
    displayException(ex);
    return;
}</LI-CODE> <P>&nbsp;</P> <P>The main work to recognize the spoken audio is done in the <EM>recognizeContinuousButton</EM> function that gets invoked when the <STRONG>Recognize continuously</STRONG> button is pressed and the onClick event is triggered:</P> <P>&nbsp;</P> <LI-CODE lang="java">///////////////////////////////////////////////////
// recognize continuously
///////////////////////////////////////////////////
recognizeContinuousButton.setOnClickListener(new View.OnClickListener() {</LI-CODE> <P>&nbsp;</P> <P>First a new recognizer is created providing information about the <EM>speechConfig</EM> we created earlier as well as the audio input from the microphone:</P> <P>&nbsp;</P> <LI-CODE lang="java">audioInput = AudioConfig.fromStreamInput(createMicrophoneStream());
reco = new SpeechRecognizer(speechConfig, audioInput);</LI-CODE> <P>&nbsp;</P> <P>Besides getting the audio stream from a microphone you could also use audio from a file or another stream, for example.</P> <P>Next, two event listeners are registered. The first one is for the <EM>Recognizing</EM> event which signals intermediate recognition results. These are generated as words are being recognized as a preliminary indication of the recognized text. The second one is the <EM>Recognized</EM> event which signals the completion of a recognition.
These will be produced when a long enough pause in the speech is detected and indicate the final recognition result for that part of the audio.</P> <P>&nbsp;</P> <LI-CODE lang="java">reco.recognizing.addEventListener((o, speechRecognitionResultEventArgs) -&gt; { final String s = speechRecognitionResultEventArgs.getResult().getText(); Log.i(logTag, "Intermediate result received: " + s); content.add(s); setRecognizedText(TextUtils.join(" ", content)); content.remove(content.size() - 1); }); reco.recognized.addEventListener((o, speechRecognitionResultEventArgs) -&gt; { final String s = speechRecognitionResultEventArgs.getResult().getText(); Log.i(logTag, "Final result received: " + s); content.add(s); setRecognizedText(TextUtils.join(" ", content)); }); </LI-CODE> <P>&nbsp;</P> <P>Lastly recognition is started using <EM>startContinuousRecognitionAsync()</EM> and a stop button is displayed.</P> <P>&nbsp;</P> <LI-CODE lang="java">final Future&lt;Void&gt; task = reco.startContinuousRecognitionAsync(); setOnTaskCompletedListener(task, result -&gt; { continuousListeningStarted = true; MainActivity.this.runOnUiThread(() -&gt; { buttonText = clickedButton.getText().toString(); clickedButton.setText("Stop"); clickedButton.setEnabled(true); }); }); </LI-CODE> <P>&nbsp;</P> <P>When the stop button is pressed recognition is stopped by calling <SPAN>stopContinuousRecognitionAsync()</SPAN>:</P> <P>&nbsp;</P> <LI-CODE lang="java">if (continuousListeningStarted) { if (reco != null) { final Future&lt;Void&gt; task = reco.stopContinuousRecognitionAsync(); setOnTaskCompletedListener(task, result -&gt; { Log.i(logTag, "Continuous recognition stopped."); MainActivity.this.runOnUiThread(() -&gt; { clickedButton.setText(buttonText); }); enableButtons(); continuousListeningStarted = false; }); } else { continuousListeningStarted = false; } return; }</LI-CODE> <P>&nbsp;</P> <P>That is all that is needed to integrate Speech to Text into your application.</P> <P>&nbsp;</P> <P><STRONG>Next Steps:</STRONG></P> <UL> <LI>Take a look at the <A href="#" target="_blank" rel="noopener">Speech-to-text quickstart</A></LI> <LI>Note that there are default limits to the number of concurrent recognitions. See <A href="#" target="_blank" rel="noopener">Speech service Quotas and Limits - Azure Cognitive Services </A>for information on these limits, how to increase them and best practices.</LI> <LI>Take a look at supported spoken languages</LI> <LI>Explore how to <A href="#" target="_blank" rel="noopener">improve accuracy with Custom Speech</A> by training a custom model that is adapted to the words and phrases used in your scenario.</LI> <LI>Learn more about other things you can do with&nbsp;<A href="#" target="_blank" rel="noopener">Speech service</A>.</LI> </UL> Fri, 14 May 2021 01:56:19 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-an-application-that-transcribes-speech/ba-p/2322484 HeikoRa 2021-05-14T01:56:19Z Sharpen your skills with new Azure AI Fundamentals free course on Udacity https://gorovian.000webhostapp.com/?exam=t5/azure-ai/sharpen-your-skills-with-new-azure-ai-fundamentals-free-course/ba-p/2345833 <P>In 2020, Artificial Intelligence (AI) Specialist was named the <A href="#" target="_blank" rel="noopener">top emerging job</A>, and according to Forbes, we are likely to witness an even more <A href="#" target="_blank" rel="noopener">accelerated adoption of AI</A> over the next year. 
<A href="#" target="_blank" rel="noopener">Microsoft and Udacity have collaborated</A> in the past to bring in-demand Azure skilling opportunities. Today, we are excited to announce the new <A href="#" target="_self">AI Fundamentals free course</A> on Udacity.</P> <P>&nbsp;</P> <P><STRONG>What to expect from AI Fundamentals</STRONG></P> <P>AI Fundamentals offers learners a basic foundational understanding of machine learning (ML) and AI concepts. This course also prepares learners to implement ML and AI workloads using Azure. There is no prerequisite for the course as AI Fundamentals is intended for learners with both technical and non-technical backgrounds.</P> <P>&nbsp;</P> <P>Upon completing the AI Fundamentals free course, learners will have a foundational understanding of the following:</P> <P>&nbsp;</P> <UL> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; AI workloads and considerations</LI> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Fundamental principles of ML on Azure</LI> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Computer vision workloads on Azure</LI> <LI>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Natural Language Processing (NLP) workloads on Azure&nbsp; &nbsp; &nbsp;</LI> </UL> <P>&nbsp;</P> <P>While Udacity enables students to learn at their own pace, the AI Fundamentals course can be completed in as little as one month at 20 hours a week.</P> <P>&nbsp;</P> <P><STRONG>You completed the AI Fundamentals course, now what? </STRONG></P> <P>After completing this course with Udacity, learners will be prepared to take the <A href="#" target="_blank" rel="noopener">AI-900 exam</A> and become certified in Microsoft Azure AI Fundamentals. Whether a learner is looking for new roles or wanting to upskill in their current role, being adept in Microsoft Azure and having a certification to prove it is a <A href="#" target="_blank" rel="noopener">competitive advantage</A>. In fact, according to the <A href="#" target="_blank" rel="noopener">Value of IT Certification Survey by Pearson Vue</A>, almost thirty-five percent of technical professionals said getting certified led to salary or wage increases, and twenty-six percent reported job promotions.</P> <P>&nbsp;</P> <P>While the AI Fundamentals course and the AI-900 certification exam will teach and test learners on their foundational understanding of AI on Azure, learners who are interested in furthering their learning can enroll in the <A href="#" target="_blank" rel="noopener">Machine Learning Engineer for Microsoft Azure </A><SPAN>Nanodegree</SPAN> program with Udacity.</P> <P>&nbsp;</P> <P>Enroll in the&nbsp;<A href="#" target="_self">AI Fundamentals free course</A>&nbsp;on Udacity today!</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> Tue, 11 May 2021 17:51:26 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/sharpen-your-skills-with-new-azure-ai-fundamentals-free-course/ba-p/2345833 brendasaucedo 2021-05-11T17:51:26Z Add voice activation to your product with Custom Keyword https://gorovian.000webhostapp.com/?exam=t5/azure-ai/add-voice-activation-to-your-product-with-custom-keyword/ba-p/2332961 <P>Voice activation enables your end-users to interact with your product completely hands-free. With products that are ambient in nature, like smart speakers, users can say a specific keyword to have the product respond with just their voice. 
This type of end-to-end voice-based experience can be achieved with keyword recognition technology, which is designed with multiple stages that span across the edge and cloud:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="multi-stage.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278884iAAA5881374C96A75/image-size/large?v=v2&amp;px=999" role="button" title="multi-stage.png" alt="multi-stage.png" /></span></P> <P>&nbsp;</P> <P>Custom Keyword allows you to create on-device keyword recognition models that are unique and personalized to your brand. The models will process incoming audio for your customized keyword and let your product respond to the end-user when the keyword is detected. When integrating your models with the Speech SDK, and Direct Line Speech or Custom Commands, you automatically get the benefits of the Keyword Verification service. Keyword Verification reduces the impact of false accepts from on-device models with robust models running on Azure.</P> <P>&nbsp;</P> <P>When creating on-device models with Custom Keyword, there is no need for you to provide any training data. Our latest <A href="#" target="_blank" rel="noopener">neural TTS</A> can generate audio in life-like quality and in diverse speakers with multi-speaker base models. Neural TTS is available in <A href="#" target="_self">60 locales and languages</A>. Custom Keyword makes use of this technology to generate training data specific to your keyword and specified pronunciations, eliminating the need for you to collect and provide training data.</P> <P>&nbsp;</P> <P>The most common use case of keyword recognition is with voice assistants. For example, "Hey Cortana" is the keyword for the Cortana assistant. Frictionless user experiences for voice assistants often require the use of microphones that are always listening and keyword recognition acts as a privacy boundary for the end-user. Sensitive and personal audio data can be processed completely on-device until the keyword is believed to be heard. Once this occurs, the gate to stream audio to the cloud for further processing can be opened. Cloud processing often includes both Speech-to-Text and Keyword Verification.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="keyword-verification-flow.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278885iD7235366C6AC7BC4/image-size/large?v=v2&amp;px=999" role="button" title="keyword-verification-flow.png" alt="keyword-verification-flow.png" /></span></P> <P>&nbsp;</P> <P>The Speech SDK provides seamless integration between the on-device keyword recognition models created using Custom Keyword and the Keyword Verification service such that you do not need to provide any configuration for the Keyword Verification service. It will work out-of-the-box.</P> <P>&nbsp;</P> <P>Let’s walk through how to create on-device keyword recognition models using Custom Keyword, with some tips along the way:</P> <P>&nbsp;</P> <OL> <LI>Go to the <A href="#" target="_blank" rel="noopener">Speech Studio</A> and Sign in or, if you do not yet have a speech subscription, choose <A href="#" target="_blank" rel="noopener">Create a subscription</A>.</LI> <LI>On the <A href="#" target="_blank" rel="noopener">Custom Keyword</A> portal, click <STRONG>New project</STRONG>. Provide a name for your project with an optional description. 
Select the language which best represents what you expect your end-users to speak in when saying the keyword.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="custom-keyword-new-project.png" style="width: 572px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278898iA0C2D8C1ADEF5320/image-size/large?v=v2&amp;px=999" role="button" title="custom-keyword-new-project.png" alt="custom-keyword-new-project.png" /></span></LI> <LI>Select your newly created project from the list and click <STRONG>Train model</STRONG>. Provide a name for your model with an optional description. For the keyword, provide the word or short phrase you expect your end-users to say to voice activate your product.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="custom-keyword-new-model.png" style="width: 569px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278904iEF1FE1B1279BB74D/image-size/large?v=v2&amp;px=999" role="button" title="custom-keyword-new-model.png" alt="custom-keyword-new-model.png" /></span><BR /><BR />Below are a few tips for choosing an effective keyword: <UL> <LI>It should take no longer than two seconds to say.</LI> <LI>Words of 4 to 7 syllables work best. For example, "Hey Computer" is a good keyword. Just "Hey" is a poor one.</LI> <LI>Keywords should follow common pronunciation rules specific to the native language of your end-users.</LI> <LI>A unique or even a made-up word that follows common pronunciation rules might reduce false positives. For example, "computerama" might be a good keyword</LI> </UL> </LI> <LI>Custom Keyword will automatically create candidate pronunciations for your keyword. Listen to each pronunciation by clicking the play button next to it. Unselect any pronunciations that do not match the pronunciation you expect your end-users to say. <P>&nbsp;</P> <P>Tip: It is important to be deliberate about the pronunciations you select to ensure the best accuracy characteristics. For example, choosing more pronunciations than needed can lead to higher false accept rates. Choosing too few pronunciations, where not all expected variations are covered, can lead to lower correct accept rates.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="custom-keyword-pronunciations.png" style="width: 573px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278906iBB696A2E2898662C/image-size/large?v=v2&amp;px=999" role="button" title="custom-keyword-pronunciations.png" alt="custom-keyword-pronunciations.png" /></span></P> </LI> <LI> <P>Choose the type of model you would like to generate. To make your keyword recognition journey as effortless as possible, Custom Keyword allows you to create two types of models, both of which do not require you to provide any training data:</P> <UL> <LI> <P><STRONG>Basic</STRONG> – Basic models are designed to be used for demo or rapid prototyping purposes and can be created within just 15 minutes.</P> </LI> <LI> <P><STRONG>Advanced</STRONG> – Advanced models are designed to be used for product integration with improved accuracy characteristics. These models can take up to 48 hours to be created. Remember, you do not need to provide any training data! 
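Advanced models leverage our Text-to-Speech technology to generate training data specific to your keyword and improve the model’s accuracy.</P> </LI> </UL> </LI> <LI> <P>Click <STRONG>Train</STRONG>, and your model will start training. Keep an eye on your email, as you will receive a notification once the model is trained. You can then download the model and integrate it with the Speech SDK.</P> <P>&nbsp;</P> <P>Tip: You can also test the model directly within the Custom Keyword portal in your browser by using the <STRONG>Testing</STRONG> tab. Choose your model and click <STRONG>Record</STRONG>. You may have to provide microphone access permissions. Now you can say the keyword and see when the model has recognized it!<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="custom-keyword-test-model.png" style="width: 578px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278907i2EE13299A50CE621/image-size/large?v=v2&amp;px=999" role="button" title="custom-keyword-test-model.png" alt="custom-keyword-test-model.png" /></span></P> <P>&nbsp;</P> </LI> </OL> <P>&nbsp;</P> <P>Once your model is trained and downloaded, wiring it up in an application takes only a few lines of code. Below is a minimal sketch using the Speech SDK for Python; the model file name and keyword are placeholders for whatever you created above, and a similar pattern is available in several other Speech SDK languages.</P> <P>&nbsp;</P> <LI-CODE lang="python">import azure.cognitiveservices.speech as speechsdk

# Placeholder file name -- substitute the .table model you downloaded
# from the Custom Keyword portal for the keyword you trained.
model = speechsdk.KeywordRecognitionModel("hey-contoso.table")

# KeywordRecognizer runs fully on-device, so no subscription key is needed for this step.
recognizer = speechsdk.KeywordRecognizer()

print("Say the keyword...")
result = recognizer.recognize_once_async(model).get()

if result.reason == speechsdk.ResultReason.RecognizedKeyword:
    print("Keyword recognized.")
    # This is the point where an application would typically open the gate and
    # start streaming audio to the cloud for speech-to-text and keyword verification.</LI-CODE> <P>&nbsp;</P> <P>When you integrate the same model through the Speech SDK together with Direct Line Speech or Custom Commands, as described earlier, the cloud-side Keyword Verification service is applied on top of this on-device result automatically.</P> <P>&nbsp;</P>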
<P>For more information on how to use your newly created keyword recognition models with the Speech SDK, read <A href="#" target="_blank" rel="noopener">Create Keyword quickstart - Speech service - Azure Cognitive Services | Microsoft Docs</A>.</P> Thu, 06 May 2021 21:00:20 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/add-voice-activation-to-your-product-with-custom-keyword/ba-p/2332961 hasyashah 2021-05-06T21:00:20Z Azure Cognitive Search performance: Setting yourself up for success https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-cognitive-search-performance-setting-yourself-up-for/ba-p/2324037 <P>Performance tuning is often harder than it should be. To help make this task a little easier, the <A href="#" target="_blank" rel="noopener">Azure Cognitive Search</A> team recently released new benchmarks, documentation, and a solution that you can use to bootstrap your own performance tests. Together, these additions will give you a deeper understanding of performance factors, how you can meet your scalability and latency requirements, and help set you up for success in the long term.</P> <P>&nbsp;</P> <P>The goal of this blog post is to give you an overview of performance in Azure Cognitive Search and to point you to resources so you can explore the concept more deeply. We’ll walk through some of the key factors that determine performance in Azure Cognitive Search, show you some performance benchmarks and how you can run your own performance tests, and ultimately provide some tips on how you can diagnose and fix performance issues you might be experiencing.</P> <P>&nbsp;</P> <H2>Key Performance Factors in Azure Cognitive Search</H2> <P>First, it’s important to understand the factors that impact performance. We outline these factors in more depth in this <A href="#" target="_blank" rel="noopener">article</A> but at a high level, these factors can be broken down into three categories:</P> <UL> <LI><A href="#" target="_blank" rel="noopener">The types of queries you’re sending</A></LI> <LI><A href="#" target="_blank" rel="noopener">The size and schema of your index</A></LI> <LI><A href="#" target="_blank" rel="noopener">The size and tier of your search service</A></LI> </UL> <P>It’s also important to know that both queries and indexing operations compete for the same resources on your search service.
Search services are heavily read-optimized to enable fast retrieval of documents. The bias towards query workloads makes indexing more computationally expensive. As a result, a high indexing load will limit the query capacity of your service.</P> <P>&nbsp;</P> <H2>Performance benchmarks</H2> <P>While every scenario is different and we always recommend running your own performance tests (see the next section), it’s helpful to have a benchmark for the performance you can expect. We have created <A href="#" target="_blank" rel="noopener">two sets of performance benchmarks</A> that represent realistic workloads that can help you understand how Cognitive Search might work in your scenario.</P> <P>&nbsp;</P> <P>These benchmarks cover two common scenarios we see from our customers:</P> <UL> <LI><STRONG>E-commerce search</STRONG> - this benchmark is based on a real customer, <A href="#" target="_self">CDON</A>,&nbsp;the Nordic region's largest online marketplace</LI> <LI><STRONG>Document search</STRONG> - this benchmark is based on queries against the <A href="#" target="_blank" rel="noopener">Semantic Scholar</A> dataset</LI> </UL> <P>The benchmarks will show you the range of performance you might expect based on your scenario, search service tier, and the number of replicas/partitions you have. For example, in the document search scenario which included 22 GB of documents, the maximum queries per second (QPS) we saw for different configurations of an S1 can be seen in the graph below:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="DerekLegenzoff_0-1620166702313.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278101i0053AE1528E1E681/image-size/large?v=v2&amp;px=999" role="button" title="DerekLegenzoff_0-1620166702313.png" alt="DerekLegenzoff_0-1620166702313.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>As you can see, the maximum QPS achieved tends to scale linearly with the number of replicas. In this case, there was enough data that adding an additional partition significantly improved the maximum QPS as well.</P> <P>&nbsp;</P> <P>You can see more details on this and other tests in the <A href="#" target="_blank" rel="noopener">performance benchmarks</A> document.</P> <P>&nbsp;</P> <H2>Running your own performance tests</H2> <P>Above all, it’s important to run your own performance tests to validate that your current setup meets your performance requirements. To make it easier to run your own tests, we created a solution containing all the assets needed for you to run scalable load tests. You can find those assets here: <A href="#" target="_self">Azure-Samples/azure-search-performance-testing</A>.</P> <P><BR />The solution assumes you have a search service with data already loaded into the search index. We provide a couple of default test strategies that you can use to run the performance test as well as instructions to help you tailor the test to your needs. The test will send a variety of queries to your search service based on a CSV file containing sample queries and you can tune the query volume based on your production requirements.</P> <P><BR /><A href="#" target="_self">Apache JMeter</A> is used to run the tests giving you access to industry standard tooling and a rich ecosystem of plugins. The solution also leverages <A href="#" target="_self">Azure DevOps</A> build pipelines and <A href="#" target="_self">Terraform</A> to run the tests and deploy the necessary infrastructure on demand. 
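With this, you can scale to as many worker nodes as you need so you won’t be limited by the throughput of the performance testing solution.</P> <P>&nbsp;</P> <P>If you just want a quick, informal look at query latency before standing up the full JMeter solution, a few lines of Python against your own index can serve as a first check. The sketch below uses the azure-search-documents package; the endpoint, index name, key, and sample queries are placeholders, and a single-threaded loop like this measures latency only, not the maximum QPS your service can sustain.</P> <P>&nbsp;</P> <LI-CODE lang="python">import statistics
import time

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder service details -- substitute your own endpoint, index, and query key.
client = SearchClient(
    endpoint="https://YOUR-SERVICE.search.windows.net",
    index_name="YOUR-INDEX",
    credential=AzureKeyCredential("YOUR-QUERY-KEY"))

sample_queries = ["laptop", "wireless headphones", "usb c charger"]
latencies = []

for query in sample_queries * 10:  # repeat the list to collect a larger sample
    start = time.perf_counter()
    results = client.search(search_text=query, top=10)
    _ = list(results)  # drain the iterator so the full response is fetched
    latencies.append(time.perf_counter() - start)

latencies.sort()
print(f"average latency: {statistics.mean(latencies) * 1000:.1f} ms")
print(f"p95 latency: {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")</LI-CODE> <P>&nbsp;</P>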
<P>&nbsp;</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="DerekLegenzoff_1-1620166702328.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278102iEB56EB7308591BB9/image-size/large?v=v2&amp;px=999" role="button" title="DerekLegenzoff_1-1620166702328.png" alt="DerekLegenzoff_1-1620166702328.png" /></span></P> <P>&nbsp;</P> <P>After running the tests, you’ll have access to rich telemetry on the results. The test results are integrated with Azure DevOps and you can also download a dashboard from JMeter that allows you to see a range of statistics and graphs on the test results:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="DerekLegenzoff_2-1620166702362.jpeg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/278103iA2614DC636388AA7/image-size/large?v=v2&amp;px=999" role="button" title="DerekLegenzoff_2-1620166702362.jpeg" alt="DerekLegenzoff_2-1620166702362.jpeg" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Improving performance</H2> <P>If you find your current levels of performance aren’t meeting your needs, there are several different ways to improve performance. The first step to improve performance is understanding why your service isn’t performing as you expect. By turning on <A href="#" target="_blank" rel="noopener">diagnostic logging</A>, you can gain access to a rich set of telemetry about your search service—this is the same telemetry that Microsoft Azure engineers use to diagnose performance issues. Once you have diagnostic logs available, there’s step by step documentation on <A href="#" target="_blank" rel="noopener">how to analyze your performance</A>.</P> <P>&nbsp;</P> <P>Finally, you can check out the <A href="#" target="_blank" rel="noopener">tips for better performance</A> to see if there are any areas you can improve on.</P> <P>&nbsp;</P> <P>If you’re still not seeing the performance you expect, feel free to reach out to us at <A href="https://gorovian.000webhostapp.com/?exam=mailto:azuresearch_contact@microsoft.com" target="_self">azuresearch_contact@microsoft.com.</A></P> <P>&nbsp;</P> Tue, 04 May 2021 22:49:44 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-cognitive-search-performance-setting-yourself-up-for/ba-p/2324037 DerekLegenzoff 2021-05-04T22:49:44Z Solution Template for Deploying Azure ML Models to AKS Clusters via Azure DevOps https://gorovian.000webhostapp.com/?exam=t5/azure-ai/solution-template-for-deploying-azure-ml-models-to-aks-clusters/ba-p/2318919 <H2>Background and Overview</H2> <P>&nbsp;</P> <P data-unlink="true">Azure Machine Learning (AML) natively supports deploying a model as a web service on Azure Kubernetes Service (AKS).
Based on the official AML documentation, deploying models to AKS offers the following benefits: Fast response time, Auto-scaling of the deployed service, Logging, Model data collection, Authentication, TLS termination, Hardware acceleration options such as GPU and field-programmable gate arrays (FPGA). <A href="#" target="_self">Please refer to the official documentation</A> for directions on using AML Python SDK, Azure CLI, or even Visual Studio Code to deploy models to AKS.&nbsp;</P> <P>&nbsp;</P> <P>This blog article, <A href="#" target="_blank" rel="noopener">as well as the accompanying GitHub repo</A>, demonstrates an alternative option, which offers significant flexibility in model deployment. In particular, this solution template helps enable the following use cases:</P> <P>&nbsp;</P> <OL> <LI>Enable multi-region deployment</LI> <LI>More flexibility in endpoint configuration and management</LI> <LI>Model agnostic--one endpoint can invoke several models, providing the required environment is built beforehand. One environment can be reused across several models</LI> <LI>Controlled rollout of model inference deployment</LI> <LI>Enable higher automation across various AML workspaces for CI/CD purposes</LI> <LI>The solution can be customized to retrieve models directly from Azure storage, without invoking AML workspace at all, providing further flexibility</LI> <LI>The solution can be modified to include use cases beyond model inferencing. Data engineering via AKS endpoint without any specified model is also possible.</LI> </OL> <P>&nbsp;</P> <P>Contributor:</P> <P><A href="#" target="_blank" rel="nofollow noopener">Han Zhang (Microsoft Data &amp; AI Cloud Solution Architect)</A></P> <P><A href="#" target="_blank" rel="nofollow noopener">Ganesh Radhakrishnan (Microsoft Senior App &amp; Infra Cloud Solution Architect)</A></P> <H2>&nbsp;</H2> <H2>Prerequisites</H2> <P>&nbsp;</P> <P>Before you proceed, please complete the following prerequisites:</P> <P>&nbsp;</P> <OL> <LI>Review and complete all modules in<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Azure Fundamentals</A><SPAN>&nbsp;</SPAN>course.</LI> <LI>An Azure<SPAN>&nbsp;</SPAN><STRONG>Resource Group</STRONG><SPAN>&nbsp;</SPAN>with<SPAN>&nbsp;</SPAN><STRONG>Owner</STRONG><SPAN>&nbsp;</SPAN><EM>Role</EM><SPAN>&nbsp;</SPAN>permission. All Azure resources will be deployed into this resource group.</LI> <LI>A<SPAN>&nbsp;</SPAN><STRONG>GitHub</STRONG><SPAN>&nbsp;</SPAN>Account to fork and clone this GitHub repository.</LI> <LI>An<SPAN>&nbsp;</SPAN><STRONG>Azure DevOps Services</STRONG><SPAN>&nbsp;</SPAN>(formerly Visual Studio Team Services) Account. You can get a free Azure DevOps account by accessing the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Azure DevOps Services</A><SPAN>&nbsp;</SPAN>web page.</LI> <LI>An<SPAN>&nbsp;</SPAN><STRONG>Azure Machine Learning</STRONG><SPAN>&nbsp;</SPAN>workspace. AML is an enterprise-grade machine learning service to build and deploy models faster. In this project, you will use AML to register and retrieve models.</LI> <LI>This project assumes readers/attendees are familiar with Azure Machine Learning, Git SCM, Linux Containers (<EM>docker engine</EM>), Kubernetes, DevOps (<EM>Continuous Integration/Continuous Deployment</EM>) concepts and developing Microservices in one or more programming languages. If you are new to any of these technologies, go thru the resources below. 
<UL> <LI><A href="#" target="_blank" rel="nofollow noopener">Build AI solutions with Azure Machine Learning</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Introduction to Git SCM</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Git SCM Docs</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Docker Overview</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Kubernetes Overview</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Introduction to Azure DevOps Services</A></LI> </UL> </LI> <LI>(Optional) Download and install<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Postman App</A>, a REST API Client used for testing the Web API's.</LI> </OL> <H2>&nbsp;</H2> <H2>Architecture Diagram</H2> <P>&nbsp;</P> <P>Here is the architecture diagram for this solution template:&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="architecture.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277546i1A0185892698F960/image-size/large?v=v2&amp;px=999" role="button" title="architecture.png" alt="architecture.png" /></span></P> <P>&nbsp;</P> <P>For easy and quick reference, readers can refer to the following online resources as needed.</P> <UL> <LI><A href="#" target="_blank" rel="nofollow noopener">Docker Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Kubernetes Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Helm 3.x Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Azure Kubernetes Service (AKS) Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Azure Container Registry Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Azure DevOps Documentation</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Azure Machine Learning Documentation</A></LI> </UL> <H2>&nbsp;</H2> <H2>Step by Step Instructions</H2> <P>&nbsp;</P> <H3>Set up Azure DevOps Project</H3> <P>&nbsp;</P> <OL> <LI>Go to<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Azure Devops website</A>, and set up a project named<SPAN>&nbsp;</SPAN><STRONG>AML_AKS_custom_deployment</STRONG><SPAN>&nbsp;</SPAN>(Substitute any name as you see fit.)</LI> </OL> <H3>&nbsp;</H3> <H3>Set up Project</H3> <P>&nbsp;</P> <OL> <LI>Go to Repos on the left side, and find<SPAN>&nbsp;</SPAN><STRONG>Import</STRONG><SPAN>&nbsp;</SPAN>under<SPAN>&nbsp;</SPAN><STRONG>Import a repository</STRONG></LI> <LI><STRONG><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277553i214DF5A2F992265D/image-size/large?v=v2&amp;px=999" role="button" title="1.png" alt="1.png" /></span></STRONG></STRONG> <P>&nbsp;Use&nbsp;<A style="background-color: #ffffff;" href="#" target="_blank" rel="noopener">https://github.com/HZ-MS-CSA/aml_aks_generic_model_deployment</A>&nbsp;as clone URL</P> </LI> </OL> <H3>&nbsp;</H3> <H3>Upload AML Model</H3> <P>&nbsp;</P> <P>As a demonstration, we will be using an onnx model from a Microsoft Cloud Workshop activity.</P> <OL> <LI>"This is a classification model for claim text that will predict<SPAN>&nbsp;</SPAN><CODE>1</CODE><SPAN>&nbsp;</SPAN>if the claim is an auto insurance claim or<SPAN>&nbsp;</SPAN><CODE>0</CODE><SPAN>&nbsp;</SPAN>if it is a home insurance 
<H3>&nbsp;</H3> <H3>Set up Build Pipeline</H3> <P>&nbsp;</P> <OL> <LI>Create a pipeline by using the classic editor. Select your Azure Repos Git as source.
Then start with an empty job.</LI> <LI>Change the agent specification as<SPAN>&nbsp;</SPAN><STRONG>ubuntu-18.04</STRONG><SPAN>&nbsp;</SPAN>(same for release pipeline as well)</LI> <LI><STRONG>Copy Files Activity</STRONG>: Configure the activity based on the screenshot below</LI> <LI><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277558i2122F2CE5C797F6F/image-size/large?v=v2&amp;px=999" role="button" title="3.png" alt="3.png" /></span> <P>&nbsp;<STRONG>Docker-Build an Image</STRONG><SPAN>: Configure the activity based on the notes and screenshot below</SPAN></P> <OL> <LI>Change task version to be 0.*</LI> <LI>Select an Azure container registry, and authorize Azure Devops's Azure connection</LI> <LI>In the "Docker File" section, select the<SPAN>&nbsp;</SPAN><STRONG>Dockerfile</STRONG><SPAN>&nbsp;</SPAN>in Azure Devops repo</LI> <LI>Leave everything else as default<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="4.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277562i73D617063A9266E7/image-size/large?v=v2&amp;px=999" role="button" title="4.png" alt="4.png" /></span> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="5.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277563i2868737FA31025D5/image-size/large?v=v2&amp;px=999" role="button" title="5.png" alt="5.png" /></span></P> </LI> </OL> </LI> <LI><STRONG style="font-family: inherit;">Docker-Push an Image</STRONG><SPAN style="font-family: inherit;">: Configure the activity based on the notes and screenshot below</SPAN> <OL> <LI>Change task version to be 0.*</LI> <LI>Select the same ACR as Build an Image step above</LI> <LI>Leave everything else as default<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="6.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277564iCD4B7F51A0176365/image-size/large?v=v2&amp;px=999" role="button" title="6.png" alt="6.png" /></span><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="7.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277565iEC93C624DCCC0008/image-size/large?v=v2&amp;px=999" role="button" title="7.png" alt="7.png" /></span><SPAN style="font-family: inherit;">&nbsp;</SPAN></LI> </OL> </LI> <LI> <P><STRONG>Publish Build Artifact</STRONG><SPAN>: Leave everything as default</SPAN></P> </LI> <LI><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="8.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277566iE26427703F358392/image-size/large?v=v2&amp;px=999" role="button" title="8.png" alt="8.png" /></span></SPAN></SPAN> <P>&nbsp;Save and queue the build pipeline.</P> </LI> </OL> <H3>&nbsp;</H3> <H3>Set up Release Pipeline</H3> <P>&nbsp;</P> <OL> <LI> <P>Start with an empty job</P> </LI> <LI> <P>Change Stage name to be AKS-Cluster-Release</P> </LI> <LI><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="9.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277570i595C2262873AB531/image-size/large?v=v2&amp;px=999" role="button" title="9.png" alt="9.png" /></span> 
<P><SPAN>Add build artifact</SPAN></P> </LI> <LI><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="10.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277571iC2156FDD223B23E2/image-size/large?v=v2&amp;px=999" role="button" title="10.png" alt="10.png" /></span></SPAN></SPAN> <P>Set up continuous deployment trigger--the release pipeline will be automatically kicked off every time a build pipeline is modified</P> </LI> <LI><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="11.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277572iE4A32FDF537F8888/image-size/large?v=v2&amp;px=999" role="button" title="11.png" alt="11.png" /></span></SPAN></SPAN> <P><STRONG>helm upgrade</STRONG>:<SPAN>&nbsp;</SPAN><EM>Package and deploy helm charts</EM><SPAN>&nbsp;</SPAN>activity.</P> <OL> <LI>Select an appropriate AKS cluster.&nbsp;<STRONG>When selecting the AKS cluster, ensure that AKS has access to the Azure Container Registry for Step 7, otherwise you will see a ImagePullBackOff error which results in timeout. You can also create a new AKS cluster through Azure Portal, and during the creation wizard process, associate your Azure Container Registry with the AKS cluster under the "Integration" tab.</STRONG></LI> <LI>Enter a custom namespace for this release. For this demo, the namespace is<SPAN>&nbsp;</SPAN><EM>aml-aks-onnx</EM></LI> <LI>Command is "upgrade"</LI> <LI>Chart type is "File path". Chart path is shown in the screenshot below</LI> <LI>Set release name as<SPAN>&nbsp;</SPAN><EM>aml-aks-onnx</EM></LI> <LI>Make sure to select<SPAN>&nbsp;</SPAN><STRONG>Install if not present</STRONG><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><STRONG>wait</STRONG></LI> <LI>Go to your Azure Container Registry, and find Login server URL. 
Your Image repository path is LOGIN_SERVER_URL/REPOSITORY_NAME.</LI> <LI>In arguments, enter the following content:&nbsp; <P><CODE>--create-namespace --set image.repository=IMAGE_REPOSITORY_PATH --set image.tag=$(Build.BuildId) --set amlargs.azureTenantId=$(TenantId) --set amlargs.azureSubscriptionId=$(SubscriptionId) --set amlargs.azureResourceGroupName=$(ResourceGroup) --set amlargs.azureMlWorkspaceName=$(WorkspaceName) --set amlargs.azureMlServicePrincipalClientId=$(ClientId) --set amlargs.azureMlServicePrincipalPassword=$(ClientSecret)</CODE></P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="12.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277573i0BC8C1196F824344/image-size/large?v=v2&amp;px=999" role="button" title="12.png" alt="12.png" /></span> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="13.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277574i351B88FF14E2CB07/image-size/large?v=v2&amp;px=999" role="button" title="13.png" alt="13.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="14.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277575iBA6DA24ABFDCD153/image-size/large?v=v2&amp;px=999" role="button" title="14.png" alt="14.png" /></span></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="15.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277576i20DE5E2BEF354532/image-size/large?v=v2&amp;px=999" role="button" title="15.png" alt="15.png" /></span></P> </LI> </OL> </LI> <LI> <P>In Variables/Pipeline Variables, create and enter the following required values</P> <OL> <LI>ClientId: Follow<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">How to: Use the portal to create an Azure AD application and service principal that can access resources</A><SPAN>&nbsp;</SPAN>to create a service principal that can access Azure ML workspace</LI> <LI>ClientSecret: See the instruction for ClientId</LI> <LI>ResourceGroup: Resource Group for AML workspace</LI> <LI>SubscriptionId: Can be found on AML workspace overview page.</LI> <LI>TenantId: Can be found in Azure Active Directory</LI> <LI>WorkspaceName: AML workspace name<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="16.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277577i556F118ED971B86D/image-size/large?v=v2&amp;px=999" role="button" title="16.png" alt="16.png" /></span></LI> </OL> </LI> <LI> <P><SPAN>Save, create, and deploy release</SPAN><SPAN><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="17.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277578i8DD54BF656974B17/image-size/large?v=v2&amp;px=999" role="button" title="17.png" alt="17.png" /></span></SPAN></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="18.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277580i3507C5E399C37208/image-size/large?v=v2&amp;px=999" role="button" title="18.png" alt="18.png" /></span></P> </LI> </OL> <H2>&nbsp;</H2> <H2>Testing</H2> <P>&nbsp;</P> <OL> <LI>Retrieve external IP for deployed service <OL> <LI>Open PowerShell</LI> <LI><CODE>az account set --subscription SUBSCRIPTION_ID</CODE></LI> <LI><CODE>az aks get-credentials --resource-group RESOURCE_GROUP_NAME --name AKS_CLUSTER_NAME</CODE></LI> <LI><CODE>kubectl get deployments --all-namespaces=true</CODE></LI> <LI>Find the<SPAN>&nbsp;</SPAN><CODE>aml-aks-onnx</CODE><SPAN>&nbsp;</SPAN>namespace, make sure it's ready</LI> <LI><CODE>kubectl get svc --namespace aml-aks-onnx</CODE>. External IP will be listed there</LI> </OL> </LI> <LI>Use<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">test.ipynb</A><SPAN>&nbsp;</SPAN>to test it out <OL> <LI>endpoint is<SPAN>&nbsp;</SPAN><CODE><A href="#" target="_blank">http://EXTERNAL_IP:80/score</A></CODE>. You can optionally set it to be<SPAN>&nbsp;</SPAN><CODE><A href="#" target="_blank">http://EXTERNAL_IP:80/healthcheck</A></CODE><SPAN>&nbsp;</SPAN>and then use the GET method to do a quick health check</LI> <LI>In the post method section, make sure to enter the model name. In this demo, the model name is claim_classifier_onnx_demo. Enter any potential insurance claim text, and see how the model classifies it as an auto or home insurance claim in real time.<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="19.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277583iBF940927172F74FE/image-size/large?v=v2&amp;px=999" role="button" title="19.png" alt="19.png" /></span> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="20.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/277584i652EB2875C98015D/image-size/large?v=v2&amp;px=999" role="button" title="20.png" alt="20.png" /></span></P> </LI> </OL> </LI> </OL>
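<P>&nbsp;</P> <P>If you would rather test from a plain Python script than from the notebook, a minimal sketch with the requests package is shown below. The payload field names are illustrative and should be matched to whatever your customized main-generic.py expects.</P> <P>&nbsp;</P> <LI-CODE lang="python">import requests

# Substitute the external IP retrieved with kubectl above.
base_url = "http://EXTERNAL_IP:80"

# Quick health check first.
print(requests.get(base_url + "/healthcheck").text)

# The payload field names are illustrative -- match them to your customized main-generic.py.
payload = {
    "model_name": "claim_classifier_onnx_demo",
    "text": "My car was hit from behind at a stop light."
}
response = requests.post(base_url + "/score", json=payload)
print(response.status_code)
print(response.json())</LI-CODE> <P>&nbsp;</P>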
<H2>&nbsp;</H2> <H2>License</H2> <P>MIT License</P> <P>Copyright (c) 2021 HZ-MS-CSA</P> <P>Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:</P> <P>The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.</P> <P>THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.</P> Tue, 05 Oct 2021 20:51:18 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/solution-template-for-deploying-azure-ml-models-to-aks-clusters/ba-p/2318919 Han_Zhang_CSA 2021-10-05T20:51:18Z Enable read-aloud for your application with Azure neural TTS https://gorovian.000webhostapp.com/?exam=t5/azure-ai/enable-read-aloud-for-your-application-with-azure-neural-tts/ba-p/2301422 <P><EM>This post is co-authored with Yulin Li, Yinhe Wei, Qinying Liao, Yueying Liu, Sheng Zhao </EM></P> <P>&nbsp;</P> <P>Voice is becoming increasingly popular in providing useful and engaging experiences for customers and employees.
The Text-to-Speech (TTS) capability of Speech on Azure Cognitive Services allows you to quickly create an intelligent read-aloud experience for your scenarios.</P> <P>&nbsp;</P> <P>In this blog, we’ll walk through an exercise which you can complete in under two hours, to get started using <A href="#" target="_blank" rel="noopener">Azure neural TTS voices</A>&nbsp;and enable your apps to read content aloud. We’ll provide high-level guidance and sample code to get you started, and we encourage you to play around with the code and get creative with your solution!</P> <P>&nbsp;</P> <H1>What is read-aloud</H1> <P>&nbsp;</P> <P>Read-aloud is a modern way to help people read and consume content like emails and Word documents more easily. It is a popular feature in many Microsoft products and has received highly positive user feedback. A few recent examples:</P> <UL> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Play My Emails:</STRONG></A> In Outlook for iOS, users can listen to their incoming email during the commute to the office. They can choose from a female and a male voice to read the email aloud, anytime their hands may be busy doing other things.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Edge read aloud:</STRONG></A> In the recent Chromium-based Edge browser, people can listen to web pages or PDF documents while multitasking. The read-aloud voice quality has been enhanced with Azure neural TTS and has become a ‘favorite’ feature for many (read the <A href="#" target="_blank" rel="noopener">full article</A>).</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Immersive Reader</STRONG></A> is a free tool that uses proven techniques to improve reading for people regardless of their age or ability. It has adopted Azure neural voices to read content aloud to students.&nbsp;</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Listen to Word documents on mobile.</STRONG></A> This is an eyes-off, potentially hands-off modern consumption experience for those who want to multitask on the go. Specifically, this feature supports a longer listening scenario for document consumption and is now available with Word on Android and iOS.</LI> </UL> <P>With all these examples and more, we’ve seen a clear trend toward voice experiences for users who consume content on the go, while multitasking, or who simply prefer to listen rather than read. With Azure neural TTS, it is easy to implement your own read-aloud that is pleasant for your users to listen to.&nbsp;</P> <P>&nbsp;</P> <H1>The benefit of using Azure neural TTS for read-aloud</H1> <P>&nbsp;</P> <P>Azure neural TTS allows you to choose from <A href="#" target="_blank" rel="noopener">more than 140 highly realistic voices</A> across 60 languages and variants that enable fluid, natural-sounding speech, with rich customization capabilities available at the same time.&nbsp;</P> <P>&nbsp;</P> <H2>High AI quality</H2> <P>Why is neural TTS so much better? Traditional TTS is a multi-step pipeline and a complex process. Each step could involve human effort, expert rules, or individual models. There is no end-to-end optimization in between, so the quality is not optimal. AI-based neural TTS voice technology has simplified the pipeline into three major components.
Each component can be modeled by advanced neural deep learning networks: a <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/unified-neural-text-analyzer-an-innovation-to-improve-neural-tts/ba-p/2102187" target="_blank" rel="noopener">neural text analysis module, </A>&nbsp;which generates more correct pronunciations for TTS to speak; a neural acoustic model, like <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener">uni-TTS </A>which predicts prosody much better than the traditional TTS, and a neural vocoder, like <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-neural-tts-upgraded-with-hifinet-achieving-higher-audio/ba-p/1847860" target="_blank" rel="noopener">HiFiNet </A>which creates audios in higher fidelity.</P> <P>&nbsp;</P> <P>With all these components, Azure neural TTS makes the listening experience much more enjoyable than the traditional TTS. Our studies repeatedly show that the read-aloud experience integrated with the highly natural voices on the Azure neural TTS platform can significantly increase the time that people spend on listening to the synthetic speech continuously, and greatly improve the effectiveness of their consumption of the audio content.</P> <P>&nbsp;</P> <H2>Broad locale coverage</H2> <P>Usually, the reading content is available in many different languages.&nbsp; To read aloud more content and reach more users, TTS needs to support various locales.&nbsp; Azure neural TTS now supports more than 60 languages off the shelf. Check out the details in the&nbsp;<A href="#" target="_blank" rel="noopener">full language list.</A></P> <P>&nbsp;</P> <P>By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. With our <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener">innovative voice models in the low-resource setting</A>, we can also extend to new languages much faster than ever.</P> <P>&nbsp;</P> <H2>Rich speaking styles</H2> <P>Azure neural TTS provides you a rich choice of different styles that resonate your content. For example, the newscast style is optimized for news content reading in a professional tone. The customer service style supports you to create a more friendly reading experience for conversational content focusing on customer support. In addition, various emotional styles and role-play capabilities can be used to create vivid audiobooks in synthetic voices.</P> <P>&nbsp;</P> <P>Here are some examples of the voices and styles used for different types of content. 
&nbsp;</P> <P>&nbsp;</P> <TABLE style="width: 900px;" width="900"> <TBODY> <TR> <TD width="144"> <P><STRONG>Language</STRONG></P> </TD> <TD width="156"> <P><STRONG>Content type</STRONG></P> </TD> <TD width="116"> <P><STRONG>Sample</STRONG></P> </TD> <TD width="207"> <P><STRONG>Note</STRONG></P> </TD> </TR> <TR> <TD width="144"> <P>English (US)</P> </TD> <TD width="156"> <P>Newscast</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Aria-news.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Aria, in the newscast style</P> </TD> </TR> <TR> <TD width="144"> <P>English (US)</P> </TD> <TD width="156"> <P>Newscast</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Guy-news.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Guy, in the general/default style</P> </TD> </TR> <TR> <TD width="144"> <P>English (US)</P> </TD> <TD width="156"> <P>Conversational</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Jenny-chat.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Jenny, in the chat style</P> </TD> </TR> <TR> <TD width="144"> <P>English (US)</P> </TD> <TD width="156"> <P>Audiobook</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Jenny-audiobook.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Jenny, in multiple styles</P> </TD> </TR> <TR> <TD width="144"> <P>Chinese (Mandarin, simplified)</P> </TD> <TD width="156"> <P>Newscast</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Yunyang-news.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Yunyang, in the newscast style</P> </TD> </TR> <TR> <TD width="144"> <P>Chinese (Mandarin, simplified)</P> </TD> <TD width="156"> <P>Conversational</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/Yunxi-assistant.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Yunxi, in the assistant style</P> </TD> </TR> <TR> <TD width="144"> <P>Chinese (Mandarin, simplified)</P> </TD> <TD width="156"> <P>Audiobook</P> </TD> <TD width="116"><AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/zh-CN-audio.wav"></SOURCE></AUDIO></TD> <TD width="207"> <P>Multiple voices used: Xiaoxiao and Yunxi</P> <P>&nbsp;</P> <P>Different styles used: lyrical, calm, angry, disgruntled, angry, embarrassed, with different style degrees applied</P> <P>&nbsp;</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>These styles can be adjusted using <A href="#" target="_blank" rel="noopener">SSML</A>, together with other tuning capabilities, including rate, pitch, pronunciation, pauses, and more.</P> <P>&nbsp;</P> <H2>Powerful customization capabilities</H2> <P>Besides the rich choice of prebuilt neural voices, Azure TTS provides you a powerful capability to create a one-of-a-kind custom voice that can differentiate your brand from others. 
Using <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/build-a-natural-custom-voice-for-your-brand/ba-p/2112777" target="_blank" rel="noopener">Custom Neural Voice</A>, you can build a highly realistic voice using less than 30 minutes of audio as training data. You can then use your customized voices to create a unique read-aloud experience that reflects your brand identity or resonate the characteristics of your content.</P> <P>&nbsp;</P> <P>Next, we’ll walk you through the coding exercise of developing the read-aloud feature with Azure neural TTS. &nbsp;</P> <P>&nbsp;</P> <H1>How to build read-aloud features with your app &nbsp;&nbsp;&nbsp;</H1> <P>&nbsp;</P> <P>It is incredibly easy to add the read-aloud capability using Azure neural TTS to your application with the Speech SDK. &nbsp;Below we describe two typical designs to enable read-aloud for different scenarios.</P> <P>&nbsp;</P> <H2>Prerequisites</H2> <P>If you don't have an Azure subscription, create a&nbsp;<A href="#" target="_blank" rel="noopener">free account</A>&nbsp;before you begin. If you have a subscription, log in to the&nbsp;<A href="#" target="_blank" rel="noopener">Azure Portal</A> and <A href="#" target="_blank" rel="noopener">create a Speech resource</A>.</P> <P>&nbsp;</P> <H2>Client-side read-aloud</H2> <P>In this design, the client directly interacts with Azure TTS using the Speech SDK. &nbsp;The following steps with the JavaScript code sample provide you the basic process to implement the read-aloud.</P> <P>&nbsp;</P> <H3>Step 1: Create synthesizer</H3> <P>First, create the synthesizer with the selected language and voices. Make sure you select a neural voice to get the best quality.&nbsp;</P> <P>&nbsp;</P> <LI-CODE lang="javascript">const config = SpeechSDK.SpeechConfig. fromAuthorizationToken(“YourAuthorizationToken”, “YourSubscriptionRegion”); config.SpeechSynthesisVoiceName = voice; config.speechSynthesisOutputFormat = SpeechSDK.SpeechSynthesisOutputFormat.Riff24Khz16BitMonoPcm; // set the endpoint id if you are using custom voice // config.endpointId = "YourEndpointId"; const player = new SpeechSDK.SpeakerAudioDestination(); const audioConfig = SpeechSDK.AudioConfig.fromSpeakerOutput(player); var synthesizer = new SpeechSDK.SpeechSynthesizer(config, audioConfig); </LI-CODE> <P>&nbsp;</P> <P>Then you can hook up the events from the synthesizer. The event will be used to update the UX while the read-aloud is on.</P> <P>&nbsp;</P> <LI-CODE lang="javascript">player.onAudioEnd = function (_) { window.console.log("playback finished"); // update your UX };</LI-CODE> <P>&nbsp;</P> <H3>Step 2: Collect word boundary events</H3> <P>The word boundary event is fired during synthesis. Usually, the synthesis speed is much faster than the playback speed of the audio. The word boundary event is fired before you get the corresponding audio chunks. 
The application can collect the event and the time stamp information of the audio for your next step.</P> <P>&nbsp;</P> <LI-CODE lang="javascript">var wordBoundaryList = []; synthesizer.wordBoundary = function (s, e) { window.console.log(e); wordBoundaryList.push(e); };</LI-CODE> <P>&nbsp;</P> <H3>Step 3: Highlight word boundary during audio playback</H3> <P>You can then highlight the word as the audio plays, using the code sample below.</P> <P>&nbsp;</P> <LI-CODE lang="javascript">setInterval(function () { if (player !== undefined) { const currentTime = player.currentTime; var wordBoundary; for (const e of wordBoundaryList) { if (currentTime * 1000 &gt; e.audioOffset / 10000) { wordBoundary = e; } else { break; } } if (wordBoundary !== undefined) { highlightDiv.innerHTML = synthesisText.value.substr(0, wordBoundary.textOffset) + "" + wordBoundary.text + "" + synthesisText.value.substr(wordBoundary.textOffset + wordBoundary.wordLength); } else { highlightDiv.innerHTML = synthesisText.value; } } }, 50);</LI-CODE> <P>&nbsp;</P> <P>See the <A href="#" target="_blank" rel="noopener">full example here</A>&nbsp;for more details.</P> <P>&nbsp;</P> <H2>Server-side read-aloud</H2> <P>In this design, the client interacts with a middle layer service, which then interacts with Azure TTS through the Speech SDK.&nbsp;It is suitable for below scenarios:</P> <UL> <LI>It is required to put the authentication secret (e.g., subscription key) on the server side.</LI> <LI>There could be additional related business logics such as text preprocessing, audio postprocessing etc.</LI> <LI>There is already a service to interact with the client application.&nbsp;</LI> </UL> <P>&nbsp;</P> <P>Below is a reference architecture for such design:</P> <DIV id="tinyMceEditorQinying Liao_3" class="mceNonEditable lia-copypaste-placeholder">&nbsp;</DIV> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="read_aloud_server_scenario.png" style="width: 924px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/276221i6FF5767F0889B34B/image-dimensions/924x511?v=v2" width="924" height="511" role="button" title="read_aloud_server_scenario.png" alt="Reference architecture design for the server-side read-aloud" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Reference architecture design for the server-side read-aloud</span></span></P> <P>The roles of each component in this architecture are described below.</P> <UL> <LI><STRONG>Azure Cognitive Services - TTS</STRONG>: the cloud API provided by Microsoft Azure, which converts text to human-like natural speech.</LI> <LI><STRONG>Middle Layer Service</STRONG>: the service built by you or your organization, which serves your client app by hosting the cross-device / cross-platform business logics.</LI> <LI><STRONG>TTS Handler</STRONG>: the component to handle TTS related business logics, which takes below responsibilities: <UL> <LI>Wraps the Speech SDK to call the Azure TTS API.</LI> <LI>Receives the text from the client app and makes preprocessing if necessary, then sends it to the Azure TTS&nbsp; API through the Speech SDK.</LI> <LI>Receives the audio stream and the TTS events (e.g., word boundary events) from Azure TTS, then makes postprocessing if necessary, and sends them to the client app.</LI> </UL> </LI> <LI><STRONG>Client App</STRONG>: your app running on the client side, which interacts with end users directly. 
It has the following responsibilities: <UL> <LI>Sends the text to your service (“Middle Layer Service”).</LI> <LI>Receives the audio stream and TTS events from your service (“Middle Layer Service”), and plays the audio to your end users, with UI rendering such as real-time text highlighting based on the word boundary events.</LI> </UL> </LI> </UL> <P>&nbsp;</P> <P>Check here for the <A href="#" target="_blank" rel="noopener">sample code to call Azure TTS API from server</A>.</P> <P>&nbsp;</P> <P>Compared to the client-side read-aloud design, the server-side read-aloud is a more advanced solution. It can cost more, but it is more powerful and can handle more complicated requirements.</P> <P>&nbsp;</P> <H1>Recommended practices for building a read-aloud experience</H1> <P>&nbsp;</P> <P>The section above shows you how to build a read-aloud feature in the client and service scenarios. Below are some recommended practices that can help make your development more efficient and improve your service experience.</P> <P>&nbsp;</P> <H2>Segmentation</H2> <P>When the content to read is long, it’s a good practice to always segment your reading content into sentences or short paragraphs in each request. Such segmentation has several benefits.</P> <UL> <LI>The response is faster for shorter content.</LI> <LI>Long synthesized audio consumes more memory.</LI> <LI>The Azure speech synthesis API requires the synthesized audio length to be less than 10 minutes. If your audio exceeds 10 minutes, it will be truncated to 10 minutes.</LI> </UL> <P>Using the Speech SDK’s <A href="#" target="_blank" rel="noopener">PullAudioOutputStream</A>, the synthesized audio from each turn can easily be merged into one stream.</P> <P>&nbsp;</P> <H2>Streaming</H2> <P>Streaming is critical to lower the latency. When the first audio chunk is available, you can start the playback or start to forward the audio chunks immediately to your clients.&nbsp;The Speech SDK provides&nbsp;<A href="#" target="_blank" rel="noopener">PullAudioOutputStream</A>,&nbsp;<A href="#" target="_blank" rel="noopener">PushAudioOutputStream</A>,&nbsp;<A href="#" target="_blank" rel="noopener">Synthesizing&nbsp;event</A>, and&nbsp;<A href="#" target="_blank" rel="noopener">AudioDataStream</A>&nbsp;for streaming. You can select the one that best suits the architecture of your application. Find the <A href="#" target="_blank" rel="noopener">samples here</A>.</P> <P>&nbsp;</P> <P>In addition, with the stream objects of the Speech SDK, you can get a seekable in-memory audio stream, which works easily with any downstream services.</P> <P>&nbsp;</P> <H1>Tell us your experiences!</H1> <P>&nbsp;</P> <P>Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.</P> <P>&nbsp;</P> <P>Let us know how you are using or plan to use Neural TTS voices in this&nbsp;<A href="#" target="_blank" rel="noopener">form</A>. If you prefer, you can also contact us at mstts [at] microsoft.com. 
We look forward to hearing about your experience and developing more compelling services together with you for the developers around the world.</P> <P>&nbsp;</P> <H1>Get started</H1> <P><A href="#" target="_blank" rel="noopener">Add voice to your app in 15 minutes</A></P> <P><A href="#" target="_blank" rel="noopener">Explore the available voices in this demo</A></P> <P><A href="#" target="_blank" rel="noopener">Build a voice-enabled bot</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy Azure TTS voices on prem with Speech Containers</A></P> <P><A href="#" target="_blank" rel="noopener">Build your custom voice</A></P> <P><A href="#" target="_blank" rel="noopener">Learn more about other Speech scenarios</A></P> <H1>&nbsp;</H1> Thu, 29 Apr 2021 06:22:10 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/enable-read-aloud-for-your-application-with-azure-neural-tts/ba-p/2301422 Qinying Liao 2021-04-29T06:22:10Z Localize your website with Microsoft Translator https://gorovian.000webhostapp.com/?exam=t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003 <H1>Web Localization and Ecommerce</H1> <P>Using Microsoft Azure Translator service, you can localize your website in a cost-effective way. With the advent of the internet, the world has become a much smaller place. Loads of information are stored and transmitted between cultures and countries, giving us all the ability to learn and grow from each other. Powered by advanced deep learning, Microsoft Azure Translator delivers fast and high-quality neural machine-based language translations, empowering you to break through language barriers and take advantage of all these powerful vehicles of knowledge and data transfer.</P> <P>Research shows that 40% of internet users will never buy from websites in a foreign language[1]. Machine translation from Azure, supporting over <A href="#" target="_blank" rel="noopener">90 languages and dialects</A>, helps you go to market faster and reach buyers in their native languages by localizing your web assets: from your marketing pages to user-generated content, and everything in-between.</P> <P>Up to 95% of the online content that companies generate is available in only one language. This is because localizing websites, especially beyond the home page, is cost prohibitive outside of the top few markets. As a result, localized content seldom extends one or two clicks beyond a home page. However, with machine translation from Azure Translator Service, content that wouldn’t otherwise be localized can be, and now most of your content can reach customers and partners worldwide.</P> <P>&nbsp;</P> <H1>How to localize your website in a cost-effective way?</H1> <P>The first step is to understand the nature of your website content and classify them. It is critical as each of them needs different levels of localization. There are four types of content: a) static and dynamic, b) generated by you and posted by customer, c) sensitive like ‘Terms of Use’, d) part of UX elements.</P> <P>Static content like about the organization, product or service description, user guides, terms of use, etc. can be translated once (or less frequently) offline into all required target languages.&nbsp; Translation results could be cached and served from your webserver. &nbsp;This could substantially reduce the cost of translation.&nbsp; Machine translation models which powers Azure Translator service are regularly updated to improve quality. 
Hence, consider refreshing the translations once a quarter, if not every month.</P> <P>User-generated content like customer reviews and information requests is dynamic in nature; not all of it requires translation, so it can be translated on demand only. You could plan for a UX element in the webpage that initiates translation on demand, with the target language identified from the user's browser language. Likewise, responses to a customer could be translated back into the language of the original request or comment (see the short code sketch below).</P> <P>For sensitive content like terms of use and company policies, a human review after machine translation is recommended.&nbsp;</P> <P>Text in the UX elements of the webpage, such as menus and form labels, is typically only one or two words long and has restricted space. Hence, UX testing after translation is recommended to check fit and finish; if needed, look for an alternate translation or a human review.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Localization.png" style="width: 687px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274754iD39DDD8C6164BB4E/image-dimensions/687x374?v=v2" width="687" height="374" role="button" title="Localization.png" alt="Localization.png" /></span></P> <P>&nbsp;</P> <P>Due to the speed and cost-effectiveness that the Azure Translator service provides, you can easily test which localization option is optimal for your business and your users. For example, you may only have the budget to localize in dozens of languages and measure customer traffic in multiple markets in parallel. Using your existing web analytics, you will be able to decide where to invest in human translation in terms of markets, languages, or pages. For example, if the machine-translated information passes a defined page view threshold, your system may trigger a human review of that content.</P>
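<P>&nbsp;</P> <P>As an illustration of the on-demand pattern for user-generated content described above, below is a minimal sketch (not part of the original article) that translates a single customer review, letting the service detect the source language automatically and targeting the reader’s browser language. It assumes a <STRONG>subscription_key</STRONG> and <STRONG>endpoint</STRONG> obtained as described in the next section.</P> <P>&nbsp;</P> <LI-CODE lang="python">import requests, uuid

subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com"

def translate_on_demand(text, target_language):
    # target_language would typically come from the user's browser settings,
    # e.g. the Accept-Language header ("fr", "de", ...)
    params = {
        'api-version': '3.0',
        'to': [target_language]  # no 'from': the source language is auto-detected
    }
    headers = {
        'Ocp-Apim-Subscription-Key': subscription_key,
        'Content-type': 'application/json',
        'X-ClientTraceId': str(uuid.uuid4())
    }
    response = requests.post(endpoint + '/translate', params=params,
                             headers=headers, json=[{'text': text}]).json()
    detected = response[0]['detectedLanguage']['language']
    return detected, response[0]['translations'][0]['text']

# Example: translate a customer review into the visitor's language
source_lang, translated = translate_on_demand("Ce produit est excellent !", "en")</LI-CODE> <P>&nbsp;</P> <P>The response also reports the detected source language, which you can use to translate your reply back into the customer's original language, as suggested above.</P>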
<P>In addition, you will still be able to maintain machine translation for other areas, to maintain reach.</P> <P>By combining pure machine translation and paid translation resources, you can select different quality levels for the translations based on your business needs.</P> <P>&nbsp;</P> <H1>How to use Azure Translator service to translate static content</H1> <P>Pre-requisite:</P> <UL> <LI>Create an <A href="#" target="_blank" rel="noopener">Azure subscription</A></LI> <LI>Once you have an Azure subscription,&nbsp;<A href="#" target="_blank" rel="noopener">create a Translator resource</A>&nbsp;in the Azure portal.</LI> <LI>Once the Translator resource is created, go to the resource and select&nbsp;‘Keys and Endpoint’, which is used to connect your application to the Translator service.</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_2-1619114278286.png" style="width: 394px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274756iF3BCD1903DCE12C5/image-dimensions/394x503?v=v2" width="394" height="503" role="button" title="Krishna_Doss_2-1619114278286.png" alt="Krishna_Doss_2-1619114278286.png" /></span><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Krishna_Doss_3-1619114278302.png" style="width: 495px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/274755iDEB45D2A41247B6B/image-dimensions/495x300?v=v2" width="495" height="300" role="button" title="Krishna_Doss_3-1619114278302.png" alt="Krishna_Doss_3-1619114278302.png" /></span></P> <P>&nbsp;</P> <P><FONT size="5"><U>Translating static webpage content</U>:</FONT></P> <P>The code sample below shows how to translate an element in the webpage. You can iterate over each element in your webpage that requires translation.</P> <P>&nbsp;</P> <LI-CODE lang="python">import os, requests, uuid, json

subscription_key = "YOUR_SUBSCRIPTION_KEY"
endpoint = "https://api.cognitive.microsofttranslator.com"
path = '/translate'
constructed_url = endpoint + path

params = {
    'api-version': '3.0',
    'to': ['de'],  # target language
    'textType': 'html'
}
headers = {
    'Ocp-Apim-Subscription-Key': subscription_key,
    'Content-type': 'application/json',
    'X-ClientTraceId': str(uuid.uuid4())
}

# You can pass more than one object in body.
body = [{
    "text": "&lt;p&gt;The samples on this page use hard-coded keys and endpoints for simplicity. \
Remember to &lt;strong&gt;remove the key from your code when you're done&lt;/strong&gt;, and \
&lt;strong&gt;never post it publicly&lt;/strong&gt;. For production, consider using a secure way of \
storing and accessing your credentials. See the Cognitive Services security article \
for more information.&lt;/p&gt;"
}]

request = requests.post(constructed_url, params=params, headers=headers, json=body)
response = request.json()
print(response[0]['translations'][0]['text'])  # shows how to access the translated text from the response</LI-CODE> <P>&nbsp;</P> <P>&nbsp;</P> <P>Localization is just a fraction of the things that you can do with Translator, so don't let the learning stop here. 
Check out recent new Translator features, additional doc links to dive deeper, and join the Translator Ask Microsoft Anything session on 4/27.</P> <P>&nbsp;</P> <P><FONT size="5"><U>Get started</U>:</FONT></P> <UL> <LI>Sign up for <A href="#" target="_blank" rel="noopener">Azure trial</A></LI> <LI>Join Translator engineering team on <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai-ama/4-27-21-translator-within-azure-cognitive-services-ama/m-p/2275137" target="_blank" rel="noopener">Ask Microsoft Anything on 4/27</A></LI> <LI>Learn about <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/translator-announces-document-translation-preview/ba-p/2144185" target="_blank" rel="noopener">Document Translation (Preview)</A></LI> <LI><A href="#" target="_blank" rel="noopener">Create a language translator application with Unity and Azure Cognitive Services</A></LI> <LI><A href="#" target="_blank" rel="noopener">Translator documentation</A></LI> </UL> <P>&nbsp;</P> <P data-unlink="true"><SPAN>[1]</SPAN>&nbsp; CSA Research – Can’t Read, Won’t Buy – B2C Analyzing Consumer Language Preferences and Behaviors in 29 Countries <A href="#" target="_blank" rel="noopener">https://insights.csa-research.com/reportaction/305013126/Marketing</A></P> Thu, 22 Apr 2021 23:57:02 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/localize-your-website-with-microsoft-translator/ba-p/2282003 Krishna_Doss 2021-04-22T23:57:02Z Big data preparation in Azure Machine Learning – powered by Azure Synapse Analytics https://gorovian.000webhostapp.com/?exam=t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671 <P>Many customers who embark on a machine learning journey deal with big data, and need the power of distributed data processing engines to prepare their data for ML. By offering Apache Spark® (powered by Azure Synapse Analytics) in Azure Machine Learning (Azure ML), we are empowering customers to work on their end-to-end ML lifecycle including large-scale data preparation, featurization, model training, and deployment within Azure ML workspace without the need to switching between multiple tools for data preparation and model training. <SPAN>The ability to build the full ML lifecycle</SPAN> within Azure ML will reduce the time required for customers to iterate on a machine learning project which typically includes multiple rounds of data preparation and training.&nbsp;</P> <P>&nbsp;</P> <P><SPAN>With the preview of managed Apache Spark in Azure ML, customers can use Azure ML notebooks to connect to Spark pools in Azure Synapse Analytics, to do interactive data preparation using&nbsp;PySpark. Customers have the&nbsp;option to configure&nbsp;Spark sessions to quickly experiment and iterate on the data. Once ready, they can leverage Azure ML pipelines to automate their end-to-end ML workflow from data preparation to model deployment all in one environment, </SPAN><SPAN>while maintaining their data and model lineage. Customers who prefer to train in the Spark environment can choose to install relevant libraries such as Spark MLlib, MMLSpark, etc. 
to complete their training on Spark pools.</SPAN></P> <P>&nbsp;</P> <P>Customers in preview will be able to benefit from the following key capabilities:</P> <P><STRONG>Reuse Spark pools from Azure Synapse workspace in Azure ML </STRONG></P> <P>Customers can leverage existing Spark pools from Azure Synapse Analytics (Azure Synapse) in Azure ML by just linking their Azure ML and Synapse workspaces via the Azure ML Studio, the Python SDK, or the ARM template. Customers just need to follow the widget in UI or leverage a few lines of code as described in the documentation <A href="#" target="_blank" rel="noopener">here</A>.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273990iC3748239EEF8CA15/image-size/large?v=v2&amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /></span></P> <P>Once the workspaces are linked, customers can <A href="#" target="_blank" rel="noopener">attach existing Spark pools</A> into Azure ML workspace and can also register the <A href="#" target="_blank" rel="noopener">supported linked services (data store sources)</A>.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture2.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273991i887D228F5BCC4424/image-size/large?v=v2&amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P><STRONG>Perform interactive data preparation via Spark magic from Azure ML notebooks </STRONG></P> <P><SPAN>Customers can use Azure ML notebooks to start Spark sessions in PySpark via Spark Magic on attached Spark pools. Customers can register Azure ML datasets to load data from storage of choice. For data in Gen1 and Gen2, customers can use their own identities to authenticate access to data by leveraging AML datasets. The attached Spark pools can be used normally in Azure ML experiments, pipelines, and designer. More information on <A href="#" target="_blank" rel="noopener">leveraging Spark Magic for data preparation on AML notebooks here</A></SPAN></P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/273992i5889A7B6E2B1AC1A/image-size/large?v=v2&amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /></span></P> <P><STRONG>&nbsp;</STRONG></P> <P>&nbsp;</P> <P><STRONG>Productionize via Azure ML pipelines to orchestrate E2E ML steps including data preparation</STRONG></P> <P><SPAN>After completing the interactive data preparation, customers can leverage Azure ML pipelines to automate data preparation on Apache Spark runtime as a step in the overall machine learning workflow. Customers can use the SynapseSparkStep for data preparation and choose either TabularDataset&nbsp;or FileDataset as input. Customers can also set up HDFSOutputDatasetConfig to generate the sparkstep output as a FileDataset, to be consumed by the following AzureML pipeline step. 
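</SPAN></P> <P>&nbsp;</P> <P>To make this concrete, below is a minimal sketch of how a <STRONG>SynapseSparkStep</STRONG> might be wired into an Azure ML pipeline. This is an illustration only, based on the preview SDK described above; the compute target name, the script file, and some parameter values are assumptions and may differ in your environment.</P> <LI-CODE lang="python">from azureml.core import Workspace, Dataset, Experiment
from azureml.data import HDFSOutputDatasetConfig
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import SynapseSparkStep

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Input: a registered tabular dataset; output: files written back to the datastore
raw_input = Dataset.get_by_name(ws, "raw_data").as_named_input("raw_data")
prepared = HDFSOutputDatasetConfig(destination=(datastore, "prepared/")) \
               .register_on_complete(name="prepared_data")

# "synapse-pool" is assumed to be the Spark pool attached to the workspace earlier
dataprep_step = SynapseSparkStep(
    name="dataprep",
    file="dataprep.py",             # PySpark script with the data preparation logic
    source_directory="./scripts",
    inputs=[raw_input],
    outputs=[prepared],
    compute_target="synapse-pool",
    driver_memory="7g", driver_cores=4,
    executor_memory="7g", executor_cores=2, num_executors=2)

# The registered output can then feed a regular training step in the same pipeline
pipeline = Pipeline(workspace=ws, steps=[dataprep_step])
Experiment(ws, "prep-and-train").submit(pipeline)</LI-CODE> <P>&nbsp;</P>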
<P><SPAN>More details on <A href="#" target="_blank" rel="noopener">How to use Apache Spark (powered by Azure Synapse) in your machine learning pipeline here</A>.</SPAN></P> <P>&nbsp;</P> <H2><SPAN>Get started with big data preparation in Azure ML via Apache Spark powered by Azure Synapse</SPAN></H2> <P>Get started by visiting our&nbsp;<A href="#" target="_blank" rel="noopener">documentation</A>&nbsp;and let us know your thoughts. We are committed to making the data preparation experience in Azure ML better for you!</P> <P>Learn more about the&nbsp;<A href="#" target="_blank" rel="noopener">Azure Machine Learning service</A>&nbsp;and&nbsp;<A href="#" target="_blank" rel="noopener">get started with a free trial</A>.</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Learn more about Azure Synapse big data preparation experience in Azure ML</A></LI> <LI><A href="#" target="_blank" rel="noopener">Learn more about how to use Apache Spark in your machine learning pipelines</A></LI> <LI>Learn more about <A href="#" target="_blank" rel="noopener">Apache Spark</A></LI> <LI>Learn more about <A href="#" target="_blank" rel="noopener">Azure Synapse Analytics</A></LI> </UL> Tue, 20 Apr 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/big-data-preparation-in-azure-machine-learning-powered-by-azure/ba-p/2278671 Xun_Wang 2021-04-20T16:00:00Z Analyzing COVID Medical Papers with Azure and Text Analytics for Health https://gorovian.000webhostapp.com/?exam=t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890 <H2>Automatic Paper Analysis</H2> <DIV> <DIV><SPAN>Automatic scientific paper analysis is a fast-growing area of study, and thanks to recent improvements in NLP techniques it has advanced greatly in recent years. In this post, we will show you how to derive specific insights from COVID papers, such as changes in medical treatment over time, or joint treatment strategies using several medications:</SPAN></DIV> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618308829590.png" style="width: 625px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272390i95B16A643F02EDE5/image-dimensions/625x200?v=v2" width="625" height="200" role="button" title="shwars_1-1618308829590.png" alt="shwars_1-1618308829590.png" /></span> <P>&nbsp;</P> 
<DIV><SPAN>The main idea of the approach I will describe in this post is to extract as much semi-structured information from text as possible, and then store it in a NoSQL database for further processing. Storing the information in a database allows us to make very specific queries to answer some of the questions, as well as to provide a visual exploration tool for medical experts to perform structured search and insight generation. The overall architecture of the proposed system is shown below:</SPAN></DIV> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ta-diagram.png" style="width: 645px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272392iA187E1F819B9C347/image-dimensions/645x142?v=v2" width="645" height="142" role="button" title="ta-diagram.png" alt="ta-diagram.png" /></span></DIV> <DIV><SPAN>We will use different Azure technologies to gain insights into the paper corpus, such as&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Text Analytics for Health</A><SPAN>,&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">CosmosDB</A><SPAN>&nbsp;</SPAN><SPAN>and&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">PowerBI</A><SPAN>. Now let’s focus on the individual parts of this diagram and discuss them in detail.</SPAN></DIV> <DIV>&nbsp;</DIV> <BLOCKQUOTE>If you want to experiment with text analytics yourself, you will need an Azure account. You can always get a&nbsp;<A href="#" target="_blank" rel="noopener">free trial</A><SPAN>&nbsp;if you do not have one. You may also want to check out&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">other AI technologies for developers</A></BLOCKQUOTE> <H2>&nbsp;</H2> <H2 id="covid-scientific-papers-and-cord-dataset">COVID Scientific Papers and CORD Dataset</H2> <P>The idea to apply<SPAN>&nbsp;</SPAN><ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding">NLP</ABBR><SPAN>&nbsp;</SPAN>methods to scientific literature seems quite natural. First of all, scientific texts are already well structured: they contain keywords, an abstract, and well-defined terms. Thus, at the very beginning of the COVID pandemic, a<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">research challenge was launched on Kaggle</A><SPAN>&nbsp;</SPAN>to analyze scientific papers on the subject. The dataset behind this competition is called<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">CORD</A><SPAN>&nbsp;</SPAN>(<A href="#" target="_blank" rel="noopener">publication</A>), and it contains a constantly updated corpus of everything that is published on topics related to COVID. Currently, it contains more than 400000 scientific papers, about half of them with full text.</P> <P>This dataset consists of the following parts:</P> <UL> <LI><STRONG>Metadata file</STRONG><SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Metadata.csv</A><SPAN>&nbsp;</SPAN>contains the most important information for all publications in one place. 
Each paper in this table has a unique identifier<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">cord_uid</CODE><SPAN>&nbsp;</SPAN>(which in fact does not happen to be completely unique, once you actually start working with the dataset). The information includes: <UL> <LI>Title of publication</LI> <LI>Journal</LI> <LI>Authors</LI> <LI>Abstract</LI> <LI>Date of publication</LI> <LI>doi</LI> </UL> </LI> <LI><STRONG>Full-text papers</STRONG><SPAN>&nbsp;</SPAN>in the<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">document_parses</CODE><SPAN>&nbsp;</SPAN>directory, that contain structured text in JSON format, which greatly simplifies the analysis.</LI> <LI>Pre-built<SPAN>&nbsp;</SPAN><STRONG>Document Embeddings</STRONG><SPAN>&nbsp;</SPAN>that map<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">cord_uid</CODE>s to float vectors that reflect the overall semantics of the paper.</LI> </UL> <P>In this post, we will focus on paper abstracts, because they contain the most important information from the paper. However, for a full analysis of the dataset, it definitely makes sense to use the same approach on full texts as well.</P> <H2>&nbsp;</H2> <H2 id="what-ai-can-do-with-text">What AI Can Do with Text?</H2> <P>In recent years, there has been huge progress in the field of Natural Language Processing, and very powerful neural network language models have been trained. In the area of<SPAN>&nbsp;</SPAN><ABBR title="Natural Language Processing - a branch of AI that deals with some semantical text understanding">NLP</ABBR>, the following tasks are typically considered:</P> <DL> <DT>Text classification / intent recognition</DT> <DD>In this task, we need to classify a piece of text into a number of categories. This is a typical classification task.</DD> <DT>Sentiment Analysis</DT> <DD>We need to return a number that shows how positive or negative the text is. This is a typical regression task.</DD> <DT>Named Entity Recognition (<ABBR title="Named Entity Recognition">NER</ABBR>)</DT> <DD>In <ABBR title="Named Entity Recognition">NER</ABBR>, we need to extract named entities from text, and determine their type. For example, we may be looking for names of medicines, or diagnoses. Another task similar to <ABBR title="Named Entity Recognition">NER</ABBR> is <STRONG>keyword extraction</STRONG>.</DD> <DT>Text summarization</DT> <DD>Here we want to be able to produce a short version of the original text, or to select the most important pieces of text.</DD> <DT>Question Answering</DT> <DD>In this task, we are given a piece of text and a question, and our goal is to find the exact answer to this question in the text.</DD> <DT>Open-Domain Question Answering (<ABBR title="Open Domain Question Answering">ODQA</ABBR>)</DT> <DD>The main difference from the previous task is that we are given a large corpus of text, and we need to find the answer to our question somewhere in the whole corpus.</DD> </DL> <BLOCKQUOTE> <P>In<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">one of my previous posts</A>, I described how we can use the<SPAN>&nbsp;</SPAN><ABBR title="Open Domain Question Answering">ODQA</ABBR><SPAN>&nbsp;</SPAN>approach to automatically find answers to specific COVID questions. 
However, this approach is not suitable for serious research.</P> </BLOCKQUOTE> <P>To make some insights from text,<SPAN>&nbsp;</SPAN><ABBR title="Named Entity Recognition">NER</ABBR><SPAN>&nbsp;</SPAN>seems to be the most prominent technique to use. If we can understand specific entities that are present in text, we could then perform semantically rich search in text that answers specific questions, as well as obtain data on co-occurrence of different entities, figuring out specific scenarios that interest us.</P> <P>To train<SPAN>&nbsp;</SPAN><ABBR title="Named Entity Recognition">NER</ABBR><SPAN>&nbsp;</SPAN>model, as well as any other neural language model, we need a reasonably large dataset that is properly marked up. Finding those datasets is often not an easy task, and producing them for new problem domain often requires initial human effort to mark up the data.</P> <H2>&nbsp;</H2> <H2 id="pre-trained-language-models">Pre-Trained Language Models</H2> <P>Luckily, modern<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">transformer language models</A><SPAN>&nbsp;</SPAN>can be trained in semi-supervised manner using transfer learning. First, the base language model (for example,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model">BERT</ABBR></A>) is trained on a large corpus of text first, and then can be specialized to a specific task such as classification or<SPAN>&nbsp;</SPAN><ABBR title="Named Entity Recognition">NER</ABBR><SPAN>&nbsp;</SPAN>on a smaller dataset.</P> <P>This transfer learning process can also contain additional step - further training of generic pre-trained model on a domain-specific dataset. For example, in the area of medical science Microsoft Research has pre-trained a model called<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">PubMedBERT</A><SPAN>&nbsp;</SPAN>(<A href="#" target="_blank" rel="noopener">publication</A>), using texts from PubMed repository. This model can then be further adopted to different specific tasks, provided we have some specialized datasets available.<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pubmedbert.png" style="width: 470px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272398i48A36098BF831B6F/image-dimensions/470x352?v=v2" width="470" height="352" role="button" title="pubmedbert.png" alt="pubmedbert.png" /></span></P> <H2 id="text-analytics-cognitive-services">Text Analytics Cognitive Services</H2> <P>However, training a model requires a lot of skills and computational power, in addition to a dataset. Microsoft (as well as some other large cloud vendors) also makes some pre-trained models available through the<SPAN>&nbsp;</SPAN><ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely">REST</ABBR><SPAN>&nbsp;</SPAN><ABBR title="Application Programming Interface">API</ABBR>. Those services are called<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Cognitive Services</A>, and one of those services for working with text is called<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Text Analytics</A>. 
It can do the following:</P> <UL> <LI><STRONG>Keyword extraction</STRONG><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><ABBR title="Named Entity Recognition">NER</ABBR><SPAN>&nbsp;</SPAN>for some common entity types, such as people, organizations, dates/times, etc.</LI> <LI><STRONG>Sentiment analysis</STRONG></LI> <LI><STRONG>Language Detection</STRONG></LI> <LI><STRONG>Entity Linking</STRONG>, by automatically adding internet links to some most common entities. This also performs<SPAN>&nbsp;</SPAN><STRONG>disambiguation</STRONG>, for example<SPAN>&nbsp;</SPAN><EM>Mars</EM><SPAN>&nbsp;</SPAN>can refer to both the planet or a chocolate bar, and correct link would be used depending on the context.</LI> </UL> <P>For example, let’s have a look at one medical paper abstract analyzed by Text Analytics:</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_0-1618309756290.png" style="width: 598px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272399iDC0B993F45A291BE/image-dimensions/598x136?v=v2" width="598" height="136" role="button" title="shwars_0-1618309756290.png" alt="shwars_0-1618309756290.png" /></span> <P>&nbsp;</P> <P>As you can see, some specific entities (for example, HCQ, which is short for hydroxychloroquine) are not recognized at all, while others are poorly categorized. Luckily, Microsoft provides special version of<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Text Analytics for Health</A>.</P> <H2>&nbsp;</H2> <H2 id="text-analytics-for-health">Text Analytics for Health</H2> <P>Text Analytics for Health is a cognitive service that exposes pre-trained PubMedBert model with some additional capabilities. Here is the result of extracting entities from the same piece of text using Text Analytics for Health:</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_1-1618309813758.png" style="width: 625px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272400iD71522A990F161E2/image-dimensions/625x180?v=v2" width="625" height="180" role="button" title="shwars_1-1618309813758.png" alt="shwars_1-1618309813758.png" /></span> <BLOCKQUOTE> <P>Currently, Text Analytics for Health is available as gated preview, meaning that you need to request access to use it in your specific scenario. This is done according to<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Ethical AI</A><SPAN>&nbsp;</SPAN>principles, to avoid irresponsible usage of this service for cases where human health depends on the result of this service. You can request access<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">here</A>.</P> </BLOCKQUOTE> <P>To perform analysis, we can use recent version<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Text Analytics Python SDK</A>, which we need to pip-install first:</P> <LI-CODE lang="bash">pip install azure.ai.textanalytics==5.1.0b5</LI-CODE> <BLOCKQUOTE> <P><STRONG>Note:</STRONG><SPAN>&nbsp;</SPAN>We need to specify a version of SDK, because otherwise we can have current non-beta version installed, which lacks Text Analytics for Health functionality.</P> </BLOCKQUOTE> <P>The service can analyze a bunch of<SPAN>&nbsp;</SPAN><STRONG>text documents</STRONG>, up to 10 at a time. You can pass either a list of documents, or dictionary. 
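</P> <P>&nbsp;</P> <P>Before calling the service, we need a client object. Below is a minimal sketch (not in the original post) of how the <CODE class="language-plaintext highlighter-rouge">text_analytics_client</CODE> used in the next snippet can be created; it assumes that <CODE class="language-plaintext highlighter-rouge">endpoint</CODE> and <CODE class="language-plaintext highlighter-rouge">key</CODE> hold the endpoint and key of your Text Analytics resource.</P> <LI-CODE lang="python"># A minimal sketch: create the Text Analytics client used below.
# `endpoint` and `key` are assumed to come from your Text Analytics resource.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint,
    credential=AzureKeyCredential(key))</LI-CODE> <P>&nbsp;</P> <P>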
Provided we have a text of abstract in<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">txt</CODE><SPAN>&nbsp;</SPAN>variable, we can use the following code to analyze it:</P> <LI-CODE lang="python">poller = text_analytics_client.begin_analyze_healthcare_entities([txt]) res = list(poller.result()) print(res)</LI-CODE> <P>&nbsp;</P> <P>This results in the following object:</P> <PRE><CODE class="language-txt">[AnalyzeHealthcareEntitiesResultItem( id=0, entities=[ HealthcareEntity(text=2019, category=Time, subcategory=None, length=4, offset=20, confidence_score=0.85, data_sources=None, related_entities={HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}): 'TimeOfCondition'}), HealthcareEntity(text=coronavirus disease pandemic, category=Diagnosis, subcategory=None, length=28, offset=25, confidence_score=0.98, data_sources=None, related_entities={}), HealthcareEntity(text=COVID-19, category=Diagnosis, subcategory=None, length=8, offset=55, confidence_score=1.0, data_sources=[HealthcareEntityDataSource(entity_id=C5203670, name=UMLS), HealthcareEntityDataSource(entity_id=U07.1, name=ICD10CM), HealthcareEntityDataSource(entity_id=10084268, name=MDR), ... </CODE></PRE> <P>As you can see, in addition to just the list of entities, we also get the following:</P> <UL> <LI><STRONG>Enity Mapping</STRONG><SPAN>&nbsp;</SPAN>of entities to standard medical ontologies, such as<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain">UMLS</ABBR></A>.</LI> <LI><STRONG>Relations</STRONG><SPAN>&nbsp;</SPAN>between entities inside the text, such as<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">TimeOfCondition</CODE>, etc.</LI> <LI><STRONG>Negation</STRONG>, which indicated that an entity was used in negative context, for example<SPAN>&nbsp;</SPAN><EM>COVID-19 diagnosis did not occur</EM>.</LI> </UL> </DIV> <DIV>&nbsp;</DIV> <DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_2-1618309813783.png" style="width: 565px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272401iCED22C0254BFADF5/image-dimensions/565x195?v=v2" width="565" height="195" role="button" title="shwars_2-1618309813783.png" alt="shwars_2-1618309813783.png" /></span> <P>&nbsp;</P> <P>In addition to using Python SDK, you can also call Text Analytics using<SPAN>&nbsp;</SPAN><ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely">REST</ABBR><SPAN>&nbsp;</SPAN><ABBR title="Application Programming Interface">API</ABBR><SPAN>&nbsp;</SPAN>directly. This is useful if you are using a programming language that does not have a corresponding SDK, or if you prefer to receive Text Analytics result in the JSON format for further storage or processing. 
In Python, this can be easily done using the<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">requests</CODE><SPAN>&nbsp;</SPAN>library:</P> <LI-CODE lang="python">uri = f"{endpoint}/text/analytics/v3.1-preview.3/entities/health/jobs?model-version=v3.1-preview.4"
headers = { "Ocp-Apim-Subscription-Key" : key }
resp = requests.post(uri, headers=headers, data=doc)
# Note: the health 'jobs' endpoint is asynchronous; in practice you poll the URL
# returned in the 'operation-location' response header until the job status is
# 'succeeded' (simplified here).
res = resp.json()
if res['status'] == 'succeeded':
    result = res['results']
else:
    result = None</LI-CODE> <P><EM>(We need to make sure to use the preview endpoint to have access to Text Analytics for Health.)</EM></P> <P>The resulting JSON will look like this:</P> <LI-CODE lang="json">{"id": "jk62qn0z",
 "entities": [
   {"offset": 24, "length": 28, "text": "coronavirus disease pandemic", "category": "Diagnosis", "confidenceScore": 0.98, "isNegated": false},
   {"offset": 54, "length": 8, "text": "COVID-19", "category": "Diagnosis", "confidenceScore": 1.0, "isNegated": false,
    "links": [
      {"dataSource": "UMLS", "id": "C5203670"},
      {"dataSource": "ICD10CM", "id": "U07.1"},
      ... ]},
   ...],
 "relations": [
   {"relationType": "Abbreviation", "bidirectional": true,
    "source": "#/results/documents/2/entities/6",
    "target": "#/results/documents/2/entities/7"},
   ...],
}</LI-CODE> <BLOCKQUOTE> <P><STRONG>Note:</STRONG><SPAN>&nbsp;</SPAN>In production, you may want to incorporate some code that will retry the operation when an error is returned by the service. For more guidance on proper implementation of cognitive services<SPAN>&nbsp;</SPAN><ABBR title="Representational State Transfer, an Internet protocol for making web services available remotely">REST</ABBR><SPAN>&nbsp;</SPAN>clients, you can<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">check the source code</A><SPAN>&nbsp;</SPAN>of the Azure Python SDK, or use<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Swagger</A><SPAN>&nbsp;</SPAN>to generate client code.</P> </BLOCKQUOTE> <H2>&nbsp;</H2> <H2 id="using-cosmosdb-to-store-analysis-result">Using Cosmos DB to Store Analysis Result</H2> <P>Using Python code similar to the above, we can extract JSON entity/relation metadata for each paper abstract. This process takes quite some time for 400K papers, and to speed it up it can be parallelized using technologies such as <A href="#" target="_self">Azure Batch</A> or<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Azure Machine Learning</A>. However, in my first experiment I just ran the script on one VM in the cloud, and the data was ready in around 11 hours.</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_3-1618309813793.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272402i1F0BECF51E41517F/image-size/medium?v=v2&amp;px=400" role="button" title="shwars_3-1618309813793.png" alt="shwars_3-1618309813793.png" /></span> <P>&nbsp;</P> <P>Having done this, we have now obtained a collection of papers, each having a number of entities and corresponding relations. This structure is inherently hierarchical, and the best way to store and process it is to use a NoSQL approach for data storage. In Azure,<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Cosmos DB</A><SPAN>&nbsp;</SPAN>is a universal database that can store and query semi-structured data like our JSON collection, thus it makes sense to upload all JSON files to a Cosmos DB collection. 
This can be done using the following code:</P> <LI-CODE lang="python">import azure.cosmos

coscli = azure.cosmos.CosmosClient(cosmos_uri, credential=cosmos_key)
cosdb = coscli.get_database_client("CORD")
cospapers = cosdb.get_container_client("Papers")

for x in all_papers_json:
    cospapers.upsert_item(x)</LI-CODE> <P>Here,<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">all_papers_json</CODE><SPAN>&nbsp;</SPAN>is a variable (or generator function) containing the individual JSON documents for each paper. We also assume that you have created a Cosmos DB database called ‘CORD’, and stored the required credentials in the<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">cosmos_uri</CODE><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">cosmos_key</CODE><SPAN>&nbsp;</SPAN>variables.</P> <P>After running this code, we will end up with the container<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">Papers</CODE><SPAN>&nbsp;</SPAN>with all the metadata. We can now work with this container in the Azure Portal by going to<SPAN>&nbsp;</SPAN><STRONG>Data Explorer</STRONG>:<SPAN>&nbsp;</SPAN></P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_4-1618309813810.png" style="width: 631px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272405i549D361D9A56057D/image-dimensions/631x284?v=v2" width="631" height="284" role="button" title="shwars_4-1618309813810.png" alt="shwars_4-1618309813810.png" /></span> <P>&nbsp;</P> <P>Now we can use<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Cosmos DB SQL</A><SPAN>&nbsp;</SPAN>in order to query our collection. For example, here is how we can obtain the list of all medications found in the corpus:</P> <LI-CODE lang="sql">-- unique medication names
SELECT DISTINCT e.text
FROM papers p
JOIN e IN p.entities
WHERE e.category='MedicationName'</LI-CODE> <P>Using SQL, we can formulate some very specific queries. Suppose a medical specialist wants to find out all proposed dosages of a specific medication (say,<SPAN>&nbsp;</SPAN><STRONG>hydroxychloroquine</STRONG>), and see all papers that mention those dosages. This can be done using the following query:</P> <LI-CODE lang="sql">-- dosage of specific drug with paper titles
SELECT p.title, r.source.text
FROM papers p
JOIN r IN p.relations
WHERE r.relationType='DosageOfMedication'
  AND CONTAINS(r.target.text,'hydro')</LI-CODE> <P>You can execute this query interactively in the Azure Portal, inside the Cosmos DB Data Explorer. The result of the query looks like this:</P> <LI-CODE lang="json">[ {
    "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
    "text": "400 mg"
  },{
    "title": "In Vitro Antiviral Activity and Projection of Optimized Dosing Design of Hydroxychloroquine for the Treatment of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2)",
    "text": "maintenance dose"
  },...]</LI-CODE> <P>A more difficult task would be to select all entities together with their corresponding ontology ID. This would be extremely useful, because eventually we want to be able to refer to a specific entity (<EM>hydroxychloroquine</EM>) regardless of the way it was mentioned in the paper (for example,<SPAN>&nbsp;</SPAN><EM>HCQ</EM><SPAN>&nbsp;</SPAN>also refers to the same medication). 
We will use<SPAN>&nbsp;</SPAN><ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain">UMLS</ABBR><SPAN>&nbsp;</SPAN>as our main ontology.</P> <LI-CODE lang="sql">--- get entities with UMLS IDs
SELECT e.category, e.text,
       ARRAY (SELECT VALUE l.id FROM l IN e.links WHERE l.dataSource='UMLS')[0] AS umls_id
FROM papers p
JOIN e IN p.entities</LI-CODE> <H2>&nbsp;</H2> <H2 id="creating-interactive-dashboards">Creating Interactive Dashboards</H2> <P>While being able to use a SQL query to answer a specific question, like medication dosages, is very useful, it is not convenient for non-IT professionals who do not have a high level of SQL mastery. To make the collection of metadata accessible to medical professionals, we can use the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">PowerBI</A><SPAN>&nbsp;</SPAN>tool to create an interactive dashboard for entity/relation exploration.</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_5-1618309813826.png" style="width: 597px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272404iF8211DD19E119DA0/image-dimensions/597x533?v=v2" width="597" height="533" role="button" title="shwars_5-1618309813826.png" alt="shwars_5-1618309813826.png" /></span> <P>&nbsp;</P> <P>In the example above, you can see a dashboard of different entities. One can select the desired entity type on the left (e.g.<SPAN>&nbsp;</SPAN><STRONG>Medication Name</STRONG><SPAN>&nbsp;</SPAN>in our case), and observe all entities of this type on the right, together with their count. You can also see the associated<SPAN>&nbsp;</SPAN><ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain">UMLS</ABBR><SPAN>&nbsp;</SPAN>IDs in the table, and from the example above one can notice that several entities can refer to the same ontology ID (<EM>hydroxychloroquine</EM><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><EM>HCQ</EM>).</P> <P>To make this dashboard, we need to use<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">PowerBI Desktop</A>. First, we need to import the Cosmos DB data - the tool supports direct import of data from Azure.</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_6-1618309813830.png" style="width: 551px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272403iCEB41BA5795F57A2/image-dimensions/551x617?v=v2" width="551" height="617" role="button" title="shwars_6-1618309813830.png" alt="shwars_6-1618309813830.png" /></span> <P>Then we provide a SQL query to get all entities with the corresponding<SPAN>&nbsp;</SPAN><ABBR title="Unified Medical Language System - one of standard ontologies used in medical domain">UMLS</ABBR><SPAN>&nbsp;</SPAN>IDs - the one we have shown above - and one more query to display all unique categories. Then we drag those two tables to the PowerBI canvas to get the dashboard shown above. 
The tool automatically understands that the two tables are linked by one field named<SPAN>&nbsp;</SPAN><STRONG>category</STRONG>, and supports filtering the second table based on the selection in the first one.</P> <P>Similarly, we can create a tool to view relations:</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_7-1618309813835.png" style="width: 567px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272406iB6EE14130450782C/image-dimensions/567x449?v=v2" width="567" height="449" role="button" title="shwars_7-1618309813835.png" alt="shwars_7-1618309813835.png" /></span> <P>&nbsp;</P> <P>From this tool, we can make queries similar to the one we made above in SQL, to determine dosages of a specific medication. To do it, we need to select the<SPAN>&nbsp;</SPAN><STRONG>DosageOfMedication</STRONG><SPAN>&nbsp;</SPAN>relation type in the left table, and then filter the right table by the medication we want. It is also possible to create further drill-down tables to display the specific papers that mention selected dosages of a medication, making this tool a useful research instrument for medical scientists.</P> <H2>&nbsp;</H2> <H2 id="getting-automatic-insights">Getting Automatic Insights</H2> <P>The most interesting part of the story, however, is to draw some automatic insights from the text, such as the change in medical treatment strategy over time. To do this, we need to write some more code in Python to do proper data analysis. The most convenient way to do that is to use<SPAN>&nbsp;</SPAN><STRONG>Notebooks embedded into Cosmos DB</STRONG>:</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_8-1618309813841.png" style="width: 622px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272408i3CBE604E97DB3587/image-dimensions/622x248?v=v2" width="622" height="248" role="button" title="shwars_8-1618309813841.png" alt="shwars_8-1618309813841.png" /></span> <P>&nbsp;</P> <P>Those notebooks support embedded SQL queries, thus we are able to execute a SQL query and get the results into a Pandas DataFrame, which is the Python-native way to explore data:</P> <LI-CODE lang="sql">%%sql --database CORD --container Papers --output meds
SELECT e.text, e.isNegated, p.title, p.publish_time,
       ARRAY (SELECT VALUE l.id FROM l IN e.links WHERE l.dataSource='UMLS')[0] AS umls_id
FROM papers p
JOIN e IN p.entities
WHERE e.category = 'MedicationName'</LI-CODE> <DIV class="language-sql highlighter-rouge"> <DIV class="highlight"><SPAN style="font-family: inherit;">Here we end up with the </SPAN><CODE class="language-plaintext highlighter-rouge">meds</CODE><SPAN style="font-family: inherit;"> DataFrame, containing names of medicines, together with the corresponding paper titles and publishing dates. 
We can further group by ontology ID to get frequencies of mentions for different medications:</SPAN></DIV> <DIV class="highlight"><LI-CODE lang="python">unimeds = meds.groupby('umls_id') \ .agg({'text' : lambda x : ','.join(x), 'title' : 'count', 'isNegated' : 'sum'}) unimeds['negativity'] = unimeds['isNegated'] / unimeds['title'] unimeds['name'] = unimeds['text'] \ .apply(lambda x: x if ',' not in x else x[:x.find(',')]) unimeds.sort_values('title',ascending=False).drop('text',axis=1)</LI-CODE></DIV> </DIV> <DIV class="language-python highlighter-rouge"> <DIV class="highlight">&nbsp;<SPAN style="font-family: inherit;">This gives us the following table:</SPAN></DIV> </DIV> <TABLE> <THEAD> <TR> <TH>umls_id</TH> <TH>title</TH> <TH>isNegated</TH> <TH>negativity</TH> <TH>name</TH> </TR> </THEAD> <TBODY> <TR> <TD>C0020336</TD> <TD>4846</TD> <TD>191</TD> <TD>0.039414</TD> <TD>hydroxychloroquine</TD> </TR> <TR> <TD>C0008269</TD> <TD>1870</TD> <TD>38</TD> <TD>0.020321</TD> <TD>chloroquine</TD> </TR> <TR> <TD>C1609165</TD> <TD>1793</TD> <TD>94</TD> <TD>0.052426</TD> <TD>Tocilizumab</TD> </TR> <TR> <TD>C4726677</TD> <TD>1625</TD> <TD>24</TD> <TD>0.014769</TD> <TD>remdesivir</TD> </TR> <TR> <TD>C0052796</TD> <TD>1201</TD> <TD>84</TD> <TD>0.069942</TD> <TD>azithromycin</TD> </TR> <TR> <TD>…</TD> <TD>…</TD> <TD>…</TD> <TD>…</TD> <TD>…</TD> </TR> <TR> <TD>C0067874</TD> <TD>1</TD> <TD>0</TD> <TD>0.000000</TD> <TD>1-butanethiol</TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>From this table, we can select the top-15 most frequently mentioned medications:</P> <LI-CODE lang="python">top = { x[0] : x[1]['name'] for i,x in zip(range(15), unimeds.sort_values('title',ascending=False).iterrows()) }</LI-CODE> <P>To see how frequency of mentions for medications changed over time, we can average out the number of mentions for each month:</P> <LI-CODE lang="python"># First, get table with only top medications imeds = meds[meds['umls_id'].apply(lambda x: x in top.keys())].copy() imeds['name'] = imeds['umls_id'].apply(lambda x: top[x]) # Create a computable field with month imeds['month'] = imeds['publish_time'].astype('datetime64[M]') # Group by month medhist = imeds.groupby(['month','name']) \ .agg({'text' : 'count', 'isNegated' : [positive_count,negative_count] })</LI-CODE> <DIV class="language-python highlighter-rouge"> <DIV class="highlight"><SPAN style="font-family: inherit;">This gives us the DataFrame that contains number of positive and negative mentions of medications for each month. 
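</SPAN></DIV> </DIV>
<P>The <CODE class="language-plaintext highlighter-rouge">positive_count</CODE> and <CODE class="language-plaintext highlighter-rouge">negative_count</CODE> helpers used in the aggregation above are not shown in the original post; a minimal sketch of what they might look like (an assumption on my part) is:</P>
<LI-CODE lang="python"># Hypothetical helpers (not shown in the original post): count how many
# mentions in a group are not negated / are negated, based on the isNegated flag.
def positive_count(s):
    return int((~s.astype(bool)).sum())   # non-negated mentions

def negative_count(s):
    return int(s.astype(bool).sum())      # negated mentions</LI-CODE>
<P>&nbsp;</P>
<DIV class="language-python highlighter-rouge"> <DIV class="highlight"><SPAN style="font-family: inherit;">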
From there, we can plot corresponding graphs using</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">matplotlib</CODE><SPAN style="font-family: inherit;">:</SPAN></DIV> <DIV class="highlight"><LI-CODE lang="python">medh = medhist.reset_index() fig,ax = plt.subplots(5,3) for i,n in enumerate(top.keys()): medh[medh['name']==top[n]] \ .set_index('month')['isNegated'] \ .plot(title=top[n],ax=ax[i//3,i%3]) fig.tight_layout()</LI-CODE></DIV> </DIV> <DIV class="language-python highlighter-rouge"> <DIV class="highlight">&nbsp;</DIV> </DIV> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_9-1618309813852.png" style="width: 636px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272407iE9F521F29AE64C09/image-dimensions/636x259?v=v2" width="636" height="259" role="button" title="shwars_9-1618309813852.png" alt="shwars_9-1618309813852.png" /></span> <H2>&nbsp;</H2> <H2 id="visualizing-terms-co-occurrence">Visualizing Terms Co-Occurrence</H2> <P>Another interesting insight is to observe which terms occur frequently together. To visualize such dependencies, there are two types of diagrams:</P> <UL> <LI><STRONG>Sankey diagram</STRONG><SPAN>&nbsp;</SPAN>allows us to investigate relations between two types of terms, eg. diagnosis and treatment</LI> <LI><STRONG>Chord diagram</STRONG><SPAN>&nbsp;</SPAN>helps to visualize co-occurrence of terms of the same type (eg. which medications are mentioned together)</LI> </UL> <P>To plot both diagrams, we need to compute<SPAN>&nbsp;</SPAN><STRONG>co-occurrence matrix</STRONG>, which in the row<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">i</CODE><SPAN>&nbsp;</SPAN>and column<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">j</CODE><SPAN>&nbsp;</SPAN>contains number of co-occurrences of terms<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">i</CODE><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">j</CODE><SPAN>&nbsp;</SPAN>in the same abstract (one can notice that this matrix is symmetric). The way we compute it is to manually select relatively small number of terms for our ontology, grouping some terms together if needed:</P> <LI-CODE lang="python">treatment_ontology = { 'C0042196': ('vaccination',1), 'C0199176': ('prevention',2), 'C0042210': ('vaccines',1), ... 
} diagnosis_ontology = { 'C5203670': ('COVID-19',0), 'C3714514': ('infection',1), 'C0011065': ('death',2), 'C0042769': ('viral infections',1), 'C1175175': ('SARS',3), 'C0009450': ('infectious disease',1), ...}</LI-CODE> <DIV class="language-python highlighter-rouge"> <DIV class="highlight"><SPAN style="font-family: inherit;">Then we define a function to compute co-occurrence matrix for two categories specified by those ontology dictionaries:</SPAN></DIV> <DIV class="highlight"><LI-CODE lang="python">def get_matrix(cat1, cat2): d1 = {i:j[1] for i,j in cat1.items()} d2 = {i:j[1] for i,j in cat2.items()} s1 = set(cat1.keys()) s2 = set(cat2.keys()) a = np.zeros((len(cat1),len(cat2))) for i in all_papers: ent = get_entities(i) for j in ent &amp; s1: for k in ent &amp; s2 : a[d1[j],d2[k]] += 1 return a</LI-CODE></DIV> </DIV> <DIV class="language-python highlighter-rouge"> <DIV class="highlight">&nbsp;<SPAN style="font-family: inherit;">Here</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">get_entities</CODE><SPAN style="font-family: inherit;">&nbsp;</SPAN><SPAN style="font-family: inherit;">function returns the list of</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><ABBR style="font-family: inherit;" title="Unified Medical Language System - one of standard ontologies used in medical domain">UMLS</ABBR><SPAN style="font-family: inherit;">&nbsp;</SPAN><SPAN style="font-family: inherit;">IDs for all entities mentioned in the paper, and</SPAN><SPAN style="font-family: inherit;">&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">all_papers</CODE><SPAN style="font-family: inherit;">&nbsp;</SPAN><SPAN style="font-family: inherit;">is the generator that returns the complete list of paper abstracts metadata.</SPAN></DIV> </DIV> <P>To actually plot the Sankey diagram, we can use<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Plotly</A><SPAN>&nbsp;</SPAN>graphics library. This process is well described<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">here</A>, so I will not go into further details. Here are the results:</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_10-1618309813867.png" style="width: 657px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272411i9CA3DE07AC0D98A4/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_10-1618309813867.png" alt="shwars_10-1618309813867.png" /></span><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_11-1618309813875.png" style="width: 657px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272410i9BA843A2C8DAEE3A/image-dimensions/657x422?v=v2" width="657" height="422" role="button" title="shwars_11-1618309813875.png" alt="shwars_11-1618309813875.png" /></span> <P>Plotting a chord diagram cannot be easily done with Plotly, but can be done with a different library -<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener">Chord</A>. 
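<P>As a reference point for the Sankey diagrams shown above, here is one way such a figure can be assembled with Plotly from the output of <CODE class="language-plaintext highlighter-rouge">get_matrix</CODE>. This is an illustrative sketch rather than the exact code behind the figures: the <CODE class="language-plaintext highlighter-rouge">sankey</CODE> helper, its <CODE class="language-plaintext highlighter-rouge">threshold</CODE> parameter and the label handling are my own simplifications.</P> <LI-CODE lang="python">import plotly.graph_objects as go

def sankey(cat1, cat2, threshold=0):
    # Co-occurrence counts between the two ontologies (rows = cat1, columns = cat2)
    m = get_matrix(cat1, cat2)
    # Map each ontology index to a display name; grouped terms share an index,
    # so later entries simply overwrite earlier ones
    lab1 = {idx: name for name, idx in cat1.values()}
    lab2 = {idx: name for name, idx in cat2.values()}
    rows, cols = sorted(lab1), sorted(lab2)
    labels = [lab1[i] for i in rows] + [lab2[j] for j in cols]
    src, tgt, val = [], [], []
    for a, i in enumerate(rows):
        for b, j in enumerate(cols):
            if m[i, j] > threshold:
                src.append(a)
                tgt.append(len(rows) + b)
                val.append(float(m[i, j]))
    go.Figure(go.Sankey(node=dict(label=labels),
                        link=dict(source=src, target=tgt, value=val))).show()

# e.g. sankey(diagnosis_ontology, treatment_ontology, threshold=100)</LI-CODE>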
The main idea remains the same - we build the co-occurrence matrix using the same function described above, passing the same ontology twice, and then pass this matrix to<SPAN>&nbsp;</SPAN><CODE class="language-plaintext highlighter-rouge">Chord</CODE>:</P> <LI-CODE lang="python">def chord(cat):
    matrix = get_matrix(cat, cat)
    np.fill_diagonal(matrix, 0)
    names = cat.keys()
    Chord(matrix.tolist(), names, font_size="11px").to_html()</LI-CODE> <DIV class="language-python highlighter-rouge"> <DIV class="highlight">&nbsp;<SPAN style="font-family: inherit;">The resulting chord diagrams for treatment types and medications are shown below:</SPAN></DIV> <DIV class="highlight">&nbsp;</DIV> </DIV> <TABLE> <TBODY> <TR> <TD><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_12-1618309813883.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272409iF91465DB62F52534/image-size/medium?v=v2&amp;px=400" role="button" title="shwars_12-1618309813883.png" alt="shwars_12-1618309813883.png" /></span> <P>&nbsp;</P> </TD> <TD><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shwars_13-1618309813895.png" style="width: 400px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/272412iF38991534E116039/image-size/medium?v=v2&amp;px=400" role="button" title="shwars_13-1618309813895.png" alt="shwars_13-1618309813895.png" /></span> <P>&nbsp;</P> </TD> </TR> <TR> <TD>Treatment types</TD> <TD>Medications</TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>The diagram on the right shows which medications are mentioned together (in the same abstract). We can see that well-known combinations, such as<SPAN>&nbsp;</SPAN><STRONG>hydroxychloroquine + azithromycin</STRONG>, are clearly visible.</P> <H2>&nbsp;</H2> <H2 id="conclusion">Conclusion</H2> <P>In this post, we have described the architecture of a proof-of-concept system for knowledge extraction from large corpora of medical texts. We use Text Analytics for Health to perform the main task of extracting entities and relations from text, and then use a number of Azure services together to build a query tool for medical scientists and to extract some visual insights. This post is quite conceptual at the moment, and the system can be further improved by providing more detailed drill-down functionality in the Power BI module, as well as by doing more data exploration on the extracted entity/relation collection. It would also be interesting to switch to processing full-text articles, in which case we need to think about slightly different criteria for co-occurrence of terms (e.g. in the same paragraph vs. the same paper).</P> <P>The same approach can be applied in other scientific areas, but we would need to be prepared to train a custom neural network model to perform entity extraction. This task has been briefly outlined above (when we talked about the use of<SPAN>&nbsp;</SPAN><ABBR title="Bidirectional Encoder Representations from Transformers - relatively modern language model">BERT</ABBR>), and I will try to focus on it in one of my next posts.
Meanwhile, feel free to reach out to me if you are doing similar research, or have any specific questions on the code and/or methodology.</P> <P>&nbsp;</P> </DIV> </DIV> Tue, 13 Apr 2021 19:42:11 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/analyzing-covid-medical-papers-with-azure-and-text-analytics-for/ba-p/2269890 shwars 2021-04-13T19:42:11Z Learn about Bot Framework Composer’s new authoring experience and deploy your bot to a telephone https://gorovian.000webhostapp.com/?exam=t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739 <P><SPAN data-contrast="auto">Customer expectations continue to increase,&nbsp;looking for&nbsp;immediate response and rapid issue resolution, across multiple&nbsp;channels&nbsp;24/7.&nbsp;Nowhere is this more apparent than the contact center, with this&nbsp;landscape&nbsp;is&nbsp;driving the need for&nbsp;efficiencies, such as reducing&nbsp;call&nbsp;handling times&nbsp;and increasing call deflection rates&nbsp;– all whilst aiming to deliver a&nbsp;personalized and tailored&nbsp;customer experience.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">To help&nbsp;respond to this need,&nbsp;we announced&nbsp;the public preview of the telephony channel for Azure Bot Service&nbsp;in February 2021,&nbsp;expanding&nbsp;the already significant number of touch points&nbsp;offered by the service, to include&nbsp;this&nbsp;increasingly&nbsp;critical method of communication.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="auto">Built on&nbsp;state-of-the-art speech&nbsp;services</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="auto">The&nbsp;new telephony channel, combined with our&nbsp;Bot Framework&nbsp;developer&nbsp;platform,&nbsp;</SPAN><SPAN data-contrast="none">makes it easy to&nbsp;rapidly&nbsp;build </SPAN><I><SPAN data-contrast="none">always-available </SPAN></I><SPAN data-contrast="none">virtual&nbsp;assistants, or IVR assistants,&nbsp;that provide&nbsp;natural language&nbsp;intent-based call handling&nbsp;and the ability to&nbsp;handle advanced conversation&nbsp;flows, such as context switching&nbsp;and&nbsp;responding to&nbsp;follow up questions&nbsp;and still meeting the&nbsp;goal of&nbsp;reducing operational costs for enterprises.&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">This new capability&nbsp;combines several of our&nbsp;Azure&nbsp;and AI services, including&nbsp;our </SPAN><I><SPAN data-contrast="none">state-of-the-art </SPAN></I><SPAN data-contrast="none">Cognitive Speech Service,&nbsp;enabling fluid, natural-sounding speech that matches the patterns and intonation of human voices&nbsp;through&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Azure Text-to-Speech neural voices</SPAN></A><SPAN data-contrast="none">,&nbsp;with&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Azure Communications Services</SPAN></A><SPAN data-contrast="none">&nbsp;powering&nbsp;various&nbsp;calling&nbsp;capabilities.&nbsp;</SPAN><SPAN data-contrast="auto">The channel 
also&nbsp;provides&nbsp;support&nbsp;for&nbsp;full duplex conversations&nbsp;and&nbsp;streaming audio over PSTN, support for DTMF,&nbsp;barge-in&nbsp;(allowing a caller to interrupt the virtual&nbsp;assistant)&nbsp;and more.&nbsp;Follow our roadmap and try out one of our samples on the&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Telephony channel GitHub repository</SPAN></A><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="none">Improving our Conversational AI SDK and tools for&nbsp;speech experiences</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-contrast="none">To&nbsp;compliment the introduction of the telephony channel and ensure our customers can create industry leading experiences, we have&nbsp;added new features to Bot Framework Composer,&nbsp;an&nbsp;open-source&nbsp;conversational&nbsp;authoring&nbsp;tool, featuring a visual canvas,&nbsp;built on top of the Bot Framework SDK,&nbsp;allowing you&nbsp;to extend and customize the conversation with code and pre-built components.&nbsp; Updates to Composer to support speech experiences include,</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">The ability to add tailored speech responses&nbsp;in seconds, either for a voice only or multi-modal (text and speech)&nbsp;agent.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:360,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">Addition of global application settings for your bot, allowing you to set a consistent voice font to be used on speech enabled channels, including taking care of setting the required base SSML tags.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:360,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="7" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">Authoring UI&nbsp;helpers that allow you to&nbsp;add additional&nbsp;common SSML (</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Speech&nbsp;Synthesis&nbsp;Markup Language</SPAN></A><SPAN data-contrast="auto">)&nbsp;tags to control the intonation, speed and even the style of the voice used,&nbsp;including new styles available for our&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">neural voice fonts</SPAN></A><SPAN data-contrast="auto">, such as&nbsp;a dedicated Customer Service style.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559685&quot;:360,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> <P><STRONG><SPAN data-contrast="auto">Comprehensive Contact Center solution through Dynamics 365</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN 
data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="auto">Microsoft announced&nbsp;the expansion of Microsoft Dynamics 365 Customer Service omnichannel capabilities to include a new voice channel,&nbsp;that is built on this telephony channel&nbsp;infrastructure.&nbsp;With&nbsp;native&nbsp;voice, businesses receive seamless, end-to-end&nbsp;experiences within a single solution, ensuring consistent, personalized, and connected support across all channels of engagement.&nbsp;This&nbsp;new voice channel for Customer Service enables an all-in-one customer service solution without fragmentation or manual data integration&nbsp;required, and&nbsp;enables a faster time to value.&nbsp;Learn&nbsp;more&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">here</SPAN></A><SPAN data-contrast="auto">.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:0,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="auto">Get started building for telephony!</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> <UL> <LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="1" data-aria-level="1"><SPAN data-contrast="auto">Sign up for&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN>Azure trial</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="2" data-aria-level="1"><SPAN data-contrast="auto">Join&nbsp;us on <A href="#" target="_self">live stream of AI Show</A></SPAN><SPAN data-contrast="auto">&nbsp;on 4/16 11AM&nbsp;PDT</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="3" data-aria-level="1"><SPAN data-contrast="auto">Sign up for&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai-ama/bd-p/AzureAIAMA" target="_blank" rel="noopener"><SPAN>conversational AI Ask Microsoft Anything (4/28)</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="4" data-aria-level="1"><SPAN data-contrast="auto">To&nbsp;get started&nbsp;developing a virtual agent, that you can surface via the new telephony channel today, download&nbsp;</SPAN><A href="#" target="_blank" rel="noopener"><SPAN data-contrast="none">Bot Framework Composer</SPAN></A><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> <LI data-leveltext="" data-font="Symbol" data-listid="10" aria-setsize="-1" data-aria-posinset="5" data-aria-level="1"><SPAN data-contrast="auto">Read more about the telephony channel&nbsp;preview,&nbsp;including&nbsp;documentation and samples, visit the Bot Framework telephony channel&nbsp;</SPAN><A href="#" target="_blank" 
rel="noopener"><SPAN data-contrast="none">GitHub&nbsp;repository</SPAN></A><SPAN data-contrast="auto">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></LI> </UL> Wed, 14 Apr 2021 16:41:33 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/learn-about-bot-framework-composer-s-new-authoring-experience/ba-p/2269739 KelvinChen 2021-04-14T16:41:33Z Introducing Multivariate Anomaly Detection https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679 <P>Microsoft partners and customers have been building metrics monitoring solutions for AIOps and predictive maintenance, by leveraging the easy-to-use time-series anomaly detection Cognitive Service: Anomaly Detector. Because of its ability to analyze time-series individually, Anomaly Detector is benefiting the industry with its simplicity and scalability.</P> <H2>&nbsp;</H2> <H2>What's new</H2> <P>We are pleased to announce the new multi-variate capability of Anomaly Detector. The new multivariate anomaly detection APIs in Anomaly Detector further enable developers to easily integrate advanced AI of detecting anomalies from groups of metrics into their applications without the need for machine learning knowledge or labeled data. Dependencies and inter-correlations between different signals are now counted as key factors. The new feature protects your mission-critical systems and physical assets, such as software applications, servers, factory machines, spacecraft, or even your business, from failures with a holistic view.</P> <P>Imagine 20 sensors from an auto engine generating 20 different signals, e.g., vibration, temperature, etc. The readings of those signals individually may not tell you much on system-level issues, but together, could represent the health of the engine. When the synergy of those signals turns odd, the multivariate anomaly detection feature can sense the anomaly like a seasoned floor expert. Moreover, the AI models are trained and customized for your data such that it understands your business. With the new APIs in Anomaly Detector, developers can now easily integrate the multivariate time-series anomaly detection capabilities as well as the interpretability of the anomalies into predictive maintenance solutions, or AIOps monitoring solutions for complex enterprise software, or business intelligence tools.</P> <H2>&nbsp;</H2> <H2>Customer love</H2> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Siemens.png" style="width: 197px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270943iF799F4859624796C/image-size/small?v=v2&amp;px=200" role="button" title="Siemens.png" alt="Siemens.png" /></span></P> <P>“Medical device production demands unprecedented precision. For this reason, the Siemens Healthineers team uses Multivariate Anomaly Detector (MVAD) in medical device stress tests during the final inspection in the production. We found MVAD easy to use and work almost out of the box with promising performance. With the ready-to-use model, we don't need to develop a custom AD model, which ensures a short time to market. We plan to expand this technology also to other use cases. It is made easy due to good integration into our ML platform and processes.” - Dr. 
Jens Fürst, Head Digitalization and Automation at Siemens Healthineers</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Airbus.jpg" style="width: 200px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270947iB01CC75155D882ED/image-size/small?v=v2&amp;px=200" role="button" title="Airbus.jpg" alt="Airbus.jpg" /></span></P> <P>To better understand the health and condition of the aircraft and foresee and fix potential problems before they occur, Airbus deployed Anomaly Detector, part of Cognitive Services, to gather and analyze the telemetry data. It began as a proof of concept of the aircraft-monitoring application by loading telemetry data from multiple flights for analysis and model training. “Early tests have shown that for many cases, the out-of-the-box solution works beautifully, which helps us deploy our solutions faster. I would say that we save up to three months on development for our smaller use cases with Anomaly Detector.” <BR />Marcel Rummens: Product Owner of Internal AI Platform, Airbus</P> <H2>&nbsp;</H2> <H2>AI horsepower</H2> <P>Time-series anomaly detection is an important research topic in data mining and has a wide range of applications in the industry. Efficient and accurate anomaly detection helps companies to monitor their key metrics continuously and alert for potential incidents on time. In many real-world applications like predictive maintenance and SpaceOps, multiple time-series metrics are collected to reflect the health status of a system. Univariate time-series anomaly detection algorithms can find anomalies for a single metric. However, it could be problematic in deciding whether the whole system is running normally. For example, sudden changes of a certain metric do not necessarily mean failures of the system. As shown in Figure 1, there are obvious boosts in the volume of TIMESERIES RECEIVED and DATA RECEIVED ON FLINK in the green segment, but the system is still in a healthy state as these two features share a consistent tendency. However, in the red segment, GC shows an inconsistent pattern with other metrics, indicating a problem in garbage collection. Consequently, it is essential to take the correlations between different time series into consideration in a multivariate time-series anomaly detection system.<span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="figure1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270976iC877B155C1E6BCA6/image-size/large?v=v2&amp;px=999" role="button" title="figure1.png" alt="Fig.1" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig.1</span></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>In this newly introduced feature, we productized a novel framework — MTAD-GAT (Multivariate Time-series Anomaly Detection via Graph Attention Network), to tackle the limitations of previous solutions. Our method considers each univariate time-series as an individual feature and tries to model the correlations between different features explicitly, while the temporal dependencies within each time-series are modeled at the same time. The key ingredients in our model are two graph attention layers, namely the feature-oriented graph attention layer and the time-oriented graph attention layer. 
The feature-oriented graph attention layer captures the causal relationships between multiple features, and the time-oriented graph attention layer underlines the dependencies along the temporal dimension. In addition, we jointly train a forecasting-based model and a reconstruction-based model for better representations of time-series data. The two models can be optimized simultaneously by a joint objective function.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="maga.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270978iFB7524395292661F/image-size/large?v=v2&amp;px=999" role="button" title="maga.png" alt="maga.png" /></span></P> <P>The magic behind the scenes can be summarized as follows:</P> <UL> <LI>A novel framework to solve the multivariate time-series anomaly detection problem in a self-supervised manner. Our model shows superior performances on two public datasets and establishes state-of-the-art scores in the literature.&nbsp;</LI> <LI>For the first time, we leverage two parallel graph attention (GAT) layers to learn the relationships between different time-series and timestamps dynamically. Especially, our model captures the correlations between different time-series successfully without any prior knowledge.</LI> <LI>We integrate the advantages of both forecasting-based and reconstruction-based models by introducing a joint optimization target. The forecasting-based model focuses on single-timestamp prediction, while the reconstruction-based model learns a latent representation of the entire time-series.</LI> <LI>Our network has good interpretability. We analyze the attention scores of multiple time-series learned by the graph attention layers, and the results correspond reasonably well to human intuition. We also show its capability of anomaly diagnosis.</LI> </UL> <H2>&nbsp;</H2> <H2>Multivariate anomaly detection API overview</H2> <P>This new feature has a different workflow compared with the existing univariate feature. There are two phases to obtain the detection results, the training phase, and the inference phase. In the training phase, you need to provide some historical data to let the model learn past patterns. 
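<P>For example, kicking off a training run is a single REST call against the <CODE class="language-plaintext highlighter-rouge">/multivariate/models</CODE> route listed in the table below. The snippet is a hedged sketch: the request body fields and the API version segment shown here are illustrative assumptions, so take the exact schema from the API reference linked in the Get started section.</P> <LI-CODE lang="python">import requests

# Illustrative placeholders for your Anomaly Detector resource
endpoint = "https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR-ANOMALY-DETECTOR-KEY",
           "Content-Type": "application/json"}

# Training request body. The field names below are an assumption;
# check the API reference for the exact schema of your API version.
body = {
    "source": "SAS-URL-OF-ZIPPED-TRAINING-DATA-IN-BLOB-STORAGE",
    "startTime": "2021-01-01T00:00:00Z",
    "endTime": "2021-01-31T00:00:00Z",
    "slidingWindow": 200
}

# POST /multivariate/models creates and trains a model (asynchronous operation).
# The "v1.1-preview" version segment is also an assumption; adjust as needed.
resp = requests.post(f"{endpoint}/anomalydetector/v1.1-preview/multivariate/models",
                     headers=headers, json=body)
resp.raise_for_status()

# The URL of the newly created model (including its modelId) is typically
# returned in the Location header; poll it for training status and parameters.
model_url = resp.headers.get("Location")
print(requests.get(model_url, headers=headers).json())</LI-CODE>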
Then in the inference phase, you can call the inference API to acquire detection results of multivariate time-series in a given range.</P> <TABLE width="691"> <TBODY> <TR> <TD width="299"> <P><STRONG>APIs</STRONG></P> </TD> <TD width="392"> <P><STRONG>Functionality</STRONG></P> </TD> </TR> <TR> <TD width="299"> <P>/multivariate/models</P> </TD> <TD width="392"> <P>Create and train model using training data</P> </TD> </TR> <TR> <TD width="299"> <P>/multivariate/models/{modelid}</P> </TD> <TD width="392"> <P>Get model info including training status and parameters used in the model</P> </TD> </TR> <TR> <TD width="299"> <P>multivariate/models[?$skip][&amp;$top]</P> </TD> <TD width="392"> <P>List models of a subscription</P> </TD> </TR> <TR> <TD width="299"> <P>/multivariate/models/{modelid}/detect</P> </TD> <TD width="392"> <P>Submit inference task with user's data, this is async</P> </TD> </TR> <TR> <TD width="299"> <P>/multivariate/results/{resultid}</P> </TD> <TD width="392"> <P>Get anomalies + root causes (the contribution scores of each variate for each incident)</P> </TD> </TR> <TR> <TD width="299"> <P>multivariate/models/{modelId}</P> </TD> <TD width="392"> <P>Delete an existing multivariate model according to the modelId</P> </TD> </TR> <TR> <TD width="299"> <P>multivariate/models/{modelId}/export</P> </TD> <TD width="392"> <P>Export Multivariate Anomaly Detection Model as Zip file</P> </TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>Get started!</H2> <UL> <LI><A href="#" target="_blank" rel="noopener">Learning more from our documentation</A></LI> <LI>QuickStarts: <A href="#" target="_blank" rel="noopener">C#,</A> <A href="#" target="_blank" rel="noopener">Python</A>, <A href="#" target="_blank" rel="noopener">JavaScript</A>, <A href="#" target="_blank" rel="noopener">Java</A></LI> <LI><A href="#" target="_self">Artificial Intelligence for developers</A></LI> </UL> <P>&nbsp;</P> Mon, 12 Apr 2021 15:11:56 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/introducing-multivariate-anomaly-detection/ba-p/2260679 Tony_Xing 2021-04-12T15:11:56Z Supercharge Azure ML code development with new VS Code integration https://gorovian.000webhostapp.com/?exam=t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129 <P><EM>This post is co-authored by Abe Omorogbe, Program Manager, Azure Machine Learning.</EM></P> <P>&nbsp;</P> <P>The Azure Machine Learning (Azure ML) team is excited to announce the release of an enhanced developer experience for ‘compute instance’ and ‘notebooks’ users, through a VS Code integration in the Azure ML Studio! It is now easier than ever to work directly on your Azure ML compute instances from within Visual Studio Code, <STRONG>,</STRONG> and with full access to a remote terminal, your favorite VS Code extensions, Git source control UI, and a debugger.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="vscode-small.gif" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270923i4A4EF83FBEBCE7EB/image-size/large?v=v2&amp;px=999" role="button" title="vscode-small.gif" alt="vscode-small.gif" /></span></P> <P>&nbsp;</P> <H2>Bringing VS Code to Azure Machine Learning</H2> <P>The Azure Machine Learning and VS Code teams have been working in collaboration over the past couple of months to better understand user workflows for authoring, editing, and managing code files. 
The demand for VS Code became clear after speaking to a wide variety of users tasked with managing larger projects and operationalizing their models. Users were eager to continue working on their Azure ML compute resources and retain the development context initially defined through the Studio UI.</P> <P>&nbsp;</P> <P>The first step to enabling a better editing experience for users was to evaluate what was currently used in VS Code. Users were familiar with extensions such as <A href="#" target="_blank" rel="noopener">Remote-SSH</A> and , the former used to connect to their remote compute and the latter to author notebook files. The advantage of using Jupyter, JupyterLab, or <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906" target="_blank" rel="noopener">Azure ML notebooks</A> was that they could be used for all compute instance types without requiring any additional configuration or networking changes.</P> <P>&nbsp;</P> <P>To enable users to work against their compute instances without requiring SSH or additional networking changes, the Azure ML and VS Code teams built a <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/power-your-vs-code-notebooks-with-azml-compute-instances/ba-p/1629630" target="_blank" rel="noopener">Notebook-specific compute instance connect experience</A>. The Azure ML extension was responsible for facilitating the connection between VS Code – Jupyter and the compute instance, taking care of authenticating on the user’s behalf. After a month or so of releasing this capability, it was clear that users were excited about connectivity without SSH and being able to work from directly within VS Code. However, working in the editor implied expectations around being able to use other VS code features such as the remote terminal, debugger, and language server. Users expressed their frustration with being limited to working in a single Notebook file, being unable to view files on the remote server, and not being able to use their preferred extensions.</P> <P>&nbsp;</P> <H2>VS Code Integration: Features</H2> <P>Learning from prior releases and talking to users led the Azure ML and VS code teams, to build a <STRONG>complete VS Code experience</STRONG> for compute instances&nbsp;<STRONG>without using SSH</STRONG>. 
Getting started with this experience is trivial – entry points have been integrated within the <A href="#" target="_blank" rel="noopener">Azure ML Studio</A> in both the Compute Instance and Notebooks tabs.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="compute-entry-point.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270905i713BCB471336A361/image-size/large?v=v2&amp;px=999" role="button" title="compute-entry-point.png" alt="Studio UI Compute Entry Point" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Studio UI Compute Entry Point</span></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="notebooks-entry-point.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270907i6BE6C0DEA9E84831/image-size/large?v=v2&amp;px=999" role="button" title="notebooks-entry-point.png" alt="Studio UI Notebooks Entry Point" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Studio UI Notebooks Entry Point</span></span></P> <P>&nbsp;</P> <P>Through this VS Code integration customers will now have access to the following features and benefits:</P> <UL> <LI><STRONG>Full integration with <A href="#" target="_self">Azure ML file share and notebooks</A>:</STRONG> All file operations in VS Code are fully synced with the Azure ML Studio. For example, if a user drags and drops files from their local machine into VS Code connected to Azure ML, all files will be synced and appear in the Azure ML Studio.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Git UI Experiences</STRONG></A><STRONG>:</STRONG> Fully manage Git repos in Azure ML with the rich VS Code source control UI.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Notebook Editor</STRONG></A>: Seamlessly click out from the Azure ML notebooks and continue to work on notebooks in the new native VS code editor.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>Debugging</STRONG></A><STRONG>:</STRONG> Use the native debugging in VS Code to debug any training script before submitting it to an Azure ML cluster for batch training.</LI> <LI><A href="#" target="_blank" rel="noopener"><STRONG>VS Code Terminal</STRONG></A><STRONG>:</STRONG> Work in the VS Code terminal that is fully connected to the compute instance.</LI> <LI><STRONG><A href="#" target="_self">VS Code Extension Support</A>:</STRONG> All VS Code extensions are fully supported in VS Code connected to the compute instance.</LI> <LI><STRONG style="font-family: inherit;"><A href="#" target="_self">Enterprise Support</A>:</STRONG><SPAN style="font-family: inherit;"> Work with VS Code securely in private endpoints without additional, complicated SSH and networking configuration. AAD credentials and RBAC are used to establish a secure connection to VNET/private link enabled Azure ML workspaces.</SPAN></LI> </UL> <H2>VS Code Integration: How it Works</H2> <P>Clicking out to VS Code will launch a desktop VS Code session which initiates a secondary remote connection to the target compute. 
Within the remote connection window, the Azure ML extension creates a WebSocket connection between your local VS Code client and the remote compute instance.</P> <P>The connected window now provides you with:</P> <OL> <LI>Access to the mounted file share, with consistent syncing between what is seen in Jupyter* and the Azure ML Notebooks experience.</LI> <LI>Access to the machine’s local SSD in case you would like to clone and manage repos outside of the shared file share.</LI> <LI>The ability to manage repositories through the source control UI.</LI> <LI>The ability to create, interact and debug running applications.</LI> <LI>A remote terminal for executing commands directly against the remote compute.</LI> </OL> <P>Below is a high-level overview of the remote connection</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="remote-connect-hl-arch.png" style="width: 624px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270909iF0AB1D7C0143EE92/image-size/large?v=v2&amp;px=999" role="button" title="remote-connect-hl-arch.png" alt="Remote Connection Architecture Diagram (High-Level)" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Remote Connection Architecture Diagram (High-Level)</span></span></P> <P>&nbsp;</P> <P>This new connect capability and direct integration in the Azure ML Studio creates a better-together experience between Azure ML and VS Code! When working on your machine learning projects you can get started with a notebook in the Azure ML Studio for early data prep and exploratory work, when you’re ready to start fleshing out the rest of your project, work on multiple file types, and use more advanced editing capabilities and VS Code extension, you can seamlessly transition over to working in VS Code. The retained context and file share usage enables you to move bi-directionally (from notebooks to VS Code and vice-versa) without requiring additional work.</P> <H2>&nbsp;</H2> <H2>Getting Started</H2> <P>You can initiate the connection to VS Code directly from the Studio UI through either the Compute Instance or Notebook pages. Alternatively, there are routes starting directly within VS Code if you would prefer. Given you have the <A href="#" target="_blank" rel="noopener">Azure Machine Learning extension</A> installed, you can find the compute instance in the tree view and right-click on it to connect. 
You can also invoke the command “Azure ML: Connect to compute instance” and follow the prompts to initiate the connection.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-command.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270910iB2F852D3AA9A8056/image-size/large?v=v2&amp;px=999" role="button" title="ci-command.png" alt="Azure ML extension command" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure ML extension command</span></span></P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ci-context-menu.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/270911i0915A8CE80FADF31/image-size/large?v=v2&amp;px=999" role="button" title="ci-context-menu.png" alt="Azure ML extension tree view context menu option" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Azure ML extension tree view context menu option</span></span></P> <P>&nbsp;</P> <P>For more details on how you can get started with this experience, please take a look at our <A href="#" target="_blank" rel="noopener">public documentation</A>.</P> <P>&nbsp;</P> <P>Both the Azure ML and VS Code extension teams are always looking for feedback on our current experiences and what we should work on next. If there is anything you would like us to prioritize, please feel free to suggest so via our <A href="#" target="_blank" rel="noopener">GitHub repo</A>; if you would like to provide more general feedback, please <A href="#" target="_blank" rel="noopener">fill out our survey</A>.</P> Thu, 08 Apr 2021 15:25:26 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/supercharge-azure-ml-code-development-with-new-vs-code/ba-p/2260129 Sid_Unnithan 2021-04-08T15:25:26Z Eleven more languages are generally available for Azure Neural Text-to-Speech https://gorovian.000webhostapp.com/?exam=t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871 <P><EM>This post is co-authored with Lihui Wang, Gang Wang, Xinfeng Chen, Qinying Liao, Garfield He and Sheng Zhao</EM></P> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Neural Text-to-Speech</A> (Neural TTS), part of Speech in Azure Cognitive Services, enables you to convert text to lifelike speech for more natural user interactions. Neural TTS has powered a wide range of scenarios, from audio content creation to natural-sounding voice assistants, for customers from all over the world. 
Today we are happy to announce that 6 new languages were added to the Neural TTS portfolio with 12 voices available, and the 10 voices in preview with 5 languages are now generally available.&nbsp;&nbsp;</P> <P>&nbsp;</P> <H2>Six new languages</H2> <P>&nbsp;</P> <P>12 voices from 6 brand-new languages, with one male and one female voice in each language are available now: Nia in ‘cy-GB’ Welsh (United Kingdom), Aled in ‘cy-GB’ Welsh (United Kingdom), Rosa in ‘en-PH’ English (Philippines), James in ‘en-PH’ English (Philippines), Charline in ‘fr-BE’ French (Belgium), Gerard in ‘fr-BE’ French (Belgium), Dena in ‘’nl-BE Dutch (Belgium), Arnaud in ‘nl-BE’ Dutch (Belgium), Polina in ‘uk-UA’ Ukranian (Ukraine), Ostap in ‘uk-UA’ Ukranian (Ukraine), Uzma in ‘ur-PK’ Urdu (Pakistan), and Asad in ‘ur-PK’ Urdu (Pakistan).</P> <P>&nbsp;</P> <P>Hear the samples below or try them with your own text in our&nbsp;<A href="#" target="_blank" rel="noopener">product demo on Azure</A>.&nbsp;</P> <P>&nbsp;</P> <TABLE width="623"> <TBODY> <TR> <TD width="59px"> <P><STRONG>Locale code</STRONG></P> </TD> <TD width="98px"> <P><STRONG>Language</STRONG></P> </TD> <TD width="66px"> <P><STRONG>Gender</STRONG></P> </TD> <TD width="116px"> <P><STRONG>Voice name</STRONG></P> </TD> <TD width="283px"> <P><STRONG>Audio sample</STRONG></P> </TD> </TR> <TR> <TD width="59px"> <P>cy-GB</P> </TD> <TD width="98px"> <P>Welsh (UK)</P> </TD> <TD width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>cy-GB-NiaNeural</P> </TD> <TD width="283px"> <P>Mae'r ysgol ar agor drwy'r wythnos.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210223-0759551088/TTS-NiaNeural-Waves-Shortsentence-00002.wav?sr=c&amp;si=ReadPolicy&amp;sig=b3aatrBz8UIddVDkuFSOc9N2KlGs2dtcIVHxd5HwShU%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>cy-GB</P> </TD> <TD width="98px"> <P>Welsh (UK)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>cy-GB-AledNeural</P> </TD> <TD width="283px"> <P>Mae Bangor 8 milltir o Gaernarfon.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210222-0949572958/TTS-AledNeural-Waves-GeneralSentence-00009.wav?sr=c&amp;si=ReadPolicy&amp;sig=REoamfTScigj6NINsMxw6XxclSTCD5CyTNJ14CUVvrA%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-PH</P> </TD> <TD width="98px"> <P>English (Philippines)</P> </TD> <TD width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>en-PH-RosaNeural</P> </TD> <TD width="283px"> <P>I need to buy a mineral water.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1015010108/TTS-RosaNeural-Waves-GeneralSentence-00058.wav?sr=c&amp;si=ReadPolicy&amp;sig=nalnHnLzKCpXrVqEcGz6RBuG1BTwEbyfhk0iRjXEUz4%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>en-PH</P> </TD> <TD width="98px"> <P>English (Philippines)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>en-PH-JamesNeural</P> </TD> <TD width="283px"> <P>Let's meet tomorrow at 6 pm.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttsus.blob.core.windows.net/default-testdata-78872-210223-1019419930/TTS-JamesNeural-Waves-GeneralSentence-00031.wav?sr=c&amp;si=ReadPolicy&amp;sig=yrVpXhdhhk25%2FjYhZCJc45aKfrwp1C%2FY8QdHUyhILWU%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>fr-BE</P> </TD> <TD width="98px"> <P>French (Belgium)</P> </TD> <TD 
width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>fr-BE-CharlineNeural</P> </TD> <TD width="283px"> <P>On se voit pour dîner demain ?</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1008227048/TTS-CharlineNeural-Waves-GeneralSentence-00016.wav?sr=c&amp;si=ReadPolicy&amp;sig=nmDuOtQXSZQtgOuPxxuaVDRT4Ljct9CEg7Ee54OA8qE%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>fr-BE</P> </TD> <TD width="98px"> <P>French (Belgium)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>fr-BE-GerardNeural</P> </TD> <TD width="283px"> <P>Il existe 2 manières de participer.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1018241597/TTS-GerardNeural-Waves-GeneralSentence-00036.wav?sr=c&amp;si=ReadPolicy&amp;sig=T698dE7j4VlnIzFh%2Fxu%2BMMPjkOjAG6a5yCuSrT4Mtcs%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>nl-BE</P> </TD> <TD width="98px"> <P>Dutch (Belgium)</P> </TD> <TD width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>nl-BE-DenaNeural</P> </TD> <TD width="283px"> <P>Hij is al urenlang online.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1041306573/TTS-DenaNeural-Waves-GeneralSentence-00008.wav?sr=c&amp;si=ReadPolicy&amp;sig=43Wt1OVaATmHPCAhdBOsuJebK01KUV959Bfg%2Ft0giL8%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>nl-BE</P> </TD> <TD width="98px"> <P>Dutch (Belgium)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>nl-BE-ArnaudNeural</P> </TD> <TD width="283px"> <P>Ik vond vele kabouters in hun tuin.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://ttseur.blob.core.windows.net/default-testdata-78872-210205-1048103107/TTS-ArnaudNeural-Waves-GeneralSentence-00038.wav?sr=c&amp;si=ReadPolicy&amp;sig=HmbZ58lyEUc57Tq6vwNOptr4avEoTc5d3HdLxt20ZuE%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>uk-UA</P> </TD> <TD width="98px"> <P>Ukrainian (Ukraine)</P> </TD> <TD width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>uk-UA-PolinaNeural</P> </TD> <TD width="283px"> <P>У Києві завершили реставрацію Андріївської церкви.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0931272540/TTS-PolinaNeural-Waves-GeneralSentence-00042.wav?sr=c&amp;si=ReadPolicy&amp;sig=cqZZm%2BwrPWhCXjrDS5UJQFP%2FTHGfDoFesOHVEhxdXhQ%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>uk-UA</P> </TD> <TD width="98px"> <P>Ukrainian (Ukraine)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>uk-UA-OstapNeural</P> </TD> <TD width="283px"> <P>Загалом було оновлено 4 395 км доріг.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0936496995/TTS-OstapNeural-Waves-GeneralSentence-00012.wav?sr=c&amp;si=ReadPolicy&amp;sig=Kc4hCGYi9j9fX4rbq%2FLi9Q%2F0DOu637zzYBbreRXAdaI%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>ur-PK</P> </TD> <TD width="98px"> <P>Urdu (Pakistan)</P> </TD> <TD width="66px"> <P>Female</P> </TD> <TD width="116px"> <P>ur-PK-UzmaNeural</P> </TD> <TD width="283px"> <P class="lia-align-right">واہ! 
کیا ہی خوبصورت نظارہ ہے۔</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0948509228/TTS-UzmaNeural-Waves-GeneralSentence-00017.wav?sr=c&amp;si=ReadPolicy&amp;sig=FeW4%2FPk%2FUWHVPPV6dh6nTIze41cxNoUg3%2B7FgFmeE70%3D"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="59px"> <P>ur-PK</P> </TD> <TD width="98px"> <P>Urdu (Pakistan)</P> </TD> <TD width="66px"> <P>Male</P> </TD> <TD width="116px"> <P>ur-PK-AsadNeural</P> </TD> <TD width="283px"> <P class="lia-align-right">سورج گرہن پاکستانی وقت کے مطابق شام 6 بج کر 34 منٹ پر شروع ہو گا۔</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/default-testdata-78872-210205-0954494762/TTS-AsadNeural-Waves-GeneralSentence-00043.wav?sr=c&amp;si=ReadPolicy&amp;sig=0op69NuG02bH%2BgMOk7dCzCwW%2Fvl%2FJqyy4E29Aj73DoI%3D"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <P>With this update, Azure TTS now supports 60 languages in total. Check out the figure below for more details or see the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">full language list.</A></P> <P>&nbsp;&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="GarfieldHe_0-1616656804430.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/266926iBAE9E05A59FF3DB9/image-size/large?v=v2&amp;px=999" role="button" title="GarfieldHe_0-1616656804430.png" alt="GarfieldHe_0-1616656804430.png" /></span>&nbsp;&nbsp;</P> <P>&nbsp;</P> <H2>Five preview languages now GA</H2> <P>&nbsp;</P> <P>Last November, we released 5 languages in preview with 10 voices for <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/neural-text-to-speech-previews-five-new-languages-with/ba-p/1907604" target="_blank" rel="noopener">European locales</A>. Now these languages become generally available in all&nbsp;<A href="#" target="_blank" rel="noopener">Neural TTS regions/datacenters</A>. 
Azure TTS now has full support for all 24 European languages.</P> <P>&nbsp;</P> <TABLE width="623"> <TBODY> <TR> <TD width="39"> <P><STRONG>Locale code</STRONG></P> </TD> <TD width="62"> <P><STRONG>Language</STRONG></P> </TD> <TD width="55"> <P><STRONG>Gender</STRONG></P> </TD> <TD width="60"> <P><STRONG>Voice name</STRONG></P> </TD> <TD width="286"> <P><STRONG>Audio samples</STRONG></P> </TD> </TR> <TR> <TD width="39"> <P>et-EE</P> </TD> <TD width="62"> <P>Estonian (Estonia)</P> </TD> <TD width="55"> <P>Female</P> </TD> <TD width="60"> <P>et-EE-AnuNeural</P> </TD> <TD width="286"> <P>Pese voodipesu kord nädalas või vähemalt kord kahe nädala järel ning ära unusta pesta ka kardinaid.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/et-EE.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>et-EE</P> </TD> <TD width="62"> <P>Estonian (Estonia)</P> </TD> <TD width="55"> <P>Male</P> </TD> <TD width="60"> <P>et-EE- KertNeural</P> </TD> <TD width="286"> <P>Ametlikku meetodit sellise pettuse avastamiseks ei olegi olemas.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/et-EE%20Kert.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>ga-IE</P> </TD> <TD width="62"> <P>Irish (Ireland)</P> </TD> <TD width="55"> <P>Female</P> </TD> <TD width="60"> <P>ga-IE- OrlaNeural</P> </TD> <TD width="286"> <P>Tá an scoil sa mbaile ar oscailt arís inniu.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/ga-IE.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>ga-IE</P> </TD> <TD width="62"> <P>Irish (Ireland)</P> </TD> <TD width="55"> <P>Male</P> </TD> <TD width="60"> <P>ga-IE- ColmNeural</P> </TD> <TD width="286"> <P>Ritheadh próiseas comhairliúcháin faoin scéal sa bhfómhar.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/ga-IE%20Colm.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>lt-LT</P> </TD> <TD width="62"> <P>Lithuanian (Lithuania)</P> </TD> <TD width="55"> <P>Female</P> </TD> <TD width="60"> <P>lt-LT- OnaNeural</P> </TD> <TD width="286"> <P>Derinti motinystę ir kūrybą išmokau jau po pirmojo vaiko gimimo.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lt-LT.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>lt-LT</P> </TD> <TD width="62"> <P>Lithuanian (Lithuania)</P> </TD> <TD width="55"> <P>Male</P> </TD> <TD width="60"> <P>lt-LT- LeonasNeural</P> </TD> <TD width="286"> <P>Aišku, anksčiau ar vėliau paaiškės tos priežastys.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lt-LT%20Leonas.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>lv-LV</P> </TD> <TD width="62"> <P>Latvian (Latvia)</P> </TD> <TD width="55"> <P>Female</P> </TD> <TD width="60"> <P>lv-LV-EveritaNeural</P> </TD> <TD width="286"> <P>Daži tumšās šokolādes gabaliņi dienā ir gandrīz būtiska uztura sastāvdaļa.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/lv-LV.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>lv-LV</P> </TD> <TD width="62"> <P>Latvian (Latvia)</P> </TD> <TD width="55"> <P>Male</P> </TD> <TD width="60"> <P>lv-LV- NilsNeural</P> </TD> 
<TD width="286"> <P>Aizvadīto gadu uzņēmums noslēdzis ar 6,3 miljonu eiro zaudējumiem.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/lv-LV%20Nils.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>mt-MT</P> </TD> <TD width="62"> <P>Maltese (Malta)</P> </TD> <TD width="55"> <P>Female</P> </TD> <TD width="60"> <P>mt-MT-GraceNeural</P> </TD> <TD width="286"> <P>Fid-diskors tiegħu, is-Segretarju Parlamentari fakkar li dan il-Gvern daħħal numru ta’ liġijiet u inizjattivi li jħarsu lill-annimali.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="https://nerualttswaves.blob.core.windows.net/neuralttswaves/mt-MT.wav"></SOURCE></AUDIO></TD> </TR> <TR> <TD width="39"> <P>mt-MT</P> </TD> <TD width="62"> <P>Maltese (Malta)</P> </TD> <TD width="55"> <P>Male</P> </TD> <TD width="60"> <P>mt-MT- JosephNeural</P> </TD> <TD width="286"> <P>Anki tfajjel tal-primarja jaf li l-popolazzjoni tikber fejn hemm il-prosperità.</P> <AUDIO controls="controls" data-mce-fragment="1"> <SOURCE src="http://tts.blob.core.windows.net/garhe/Dec%20release%20EU24/mt-MT%20Joseph.wav"></SOURCE></AUDIO></TD> </TR> </TBODY> </TABLE> <P>&nbsp;</P> <H2>How to integrate with the new voices/languages</H2> <P>&nbsp;</P> <P>Azure TTS now covers more languages of the world. Applications using Azure TTS can be easily updated to support coverage of additional countries. All the voices are available in the <A href="#" target="_blank" rel="noopener">same API</A>&nbsp;and <A href="#" target="_blank" rel="noopener">SDK</A>. Developers can just edit the voice and locale list in their applications to use these new voices without code logic modifications.</P> <P>&nbsp;</P> <P>For instance, <A href="#" target="_blank" rel="noopener">Microsoft Teams auto attendants</A>&nbsp;lets people call your organization and navigate a menu system to speak to the right department, call queue, person, or an operator. It uses Azure TTS to render customized prompts as a call response. To better localize audio prompts for different countries, Teams has been integrated with the new TTS languages to serve more customers around the world.</P> <P>&nbsp;</P> <H2>Want more languages or voices?</H2> <P>&nbsp;</P> <P>If you find that the language which you are looking for is not supported by Azure TTS, reach out to your sales representative, or file a support ticket on Azure. We'd be happy to&nbsp;engage and discuss how to support the languages you need. You can also customize and create a brand voice with your speech data for your apps using the&nbsp;<A href="#" target="_blank" rel="noopener">Custom Neural Voice</A> feature.&nbsp;</P> <P>&nbsp;</P> <H2>Tell us your experiences!</H2> <P>&nbsp;</P> <P>By offering more voices across more languages and locales, we anticipate developers across the world will be able to build applications that change experiences for millions. Whether you are building a voice-enabled chatbot or IoT device, an IVR solution, adding read-aloud features to your app, converting e-books to audio books, or even adding Speech to a translation app, you can make all these experiences natural sounding and fun with Neural TTS.</P> <P>&nbsp;</P> <P>Let us know how you are using or plan to use Neural TTS voices in this <A href="#" target="_blank" rel="noopener">form</A>. If you prefer, you can also contact us at mstts [at] microsoft.com. 
We look forward to hearing about your experience and to developing more compelling services together with you for developers around the world.</P> <P>&nbsp;</P> <H2>Get started</H2> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Add voice to your app in 15 minutes</A></P> <P><A href="#" target="_blank" rel="noopener">Explore the available voices in this demo</A></P> <P><A href="#" target="_blank" rel="noopener">Build a voice-enabled bot</A></P> <P><A href="#" target="_blank" rel="noopener">Deploy Azure TTS voices on prem with Speech Containers</A></P> <P><A href="#" target="_blank" rel="noopener">Build your custom voice</A></P> <P>&nbsp;</P> Wed, 31 Mar 2021 15:21:04 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/eleven-more-languages-are-generally-available-for-azure-neural/ba-p/2236871 GarfieldHe 2021-03-31T15:21:04Z Azure Speech and Batch Ingestion https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539 <H1>Getting started with Azure Speech and Batch Ingestion Client</H1> <P>&nbsp;</P> <P>Batch Ingestion Client is a zero-touch transcription solution for all your audio files in your Azure Storage. If you are looking for a quick and effortless way to transcribe your audio files, or even just to explore transcription, without writing any code, then this solution is for you. Through an ARM template deployment, all the resources necessary to seamlessly process your audio files are set up and set in motion.</P> <P>&nbsp;</P> <H1>Why do I need this?</H1> <P>&nbsp;</P> <P>Getting started with any API requires some amount of time investment in learning the API, understanding its scope, and getting value through trial and error. To speed up your transcription solution, for those of you who do not have the time to invest in getting to know our API or related best practices, we created an ingestion layer (a client for batch transcription) that will help you set up a full-blown, scalable, and secure transcription pipeline without writing any code.</P> <P>&nbsp;</P> <P>This is a smart client in the sense that it implements best practices and is optimized against the capabilities of the Azure Speech infrastructure. It utilizes Azure resources such as Service Bus and Azure Functions to orchestrate transcription requests to Azure Speech Services for audio files landing in your dedicated storage containers.</P> <P>&nbsp;</P> <P>Before we delve deeper into the set-up instructions, let us have a look at the architecture of the solution this ARM template builds.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="architecture.png" style="width: 741px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265483iFB98720C64CE6685/image-size/large?v=v2&amp;px=999" role="button" title="architecture.png" alt="architecture.png" /></span></P> <P class="lia-align-center">&nbsp;</P> <P>&nbsp;</P> <P>The diagram is simple and hopefully self-explanatory. As soon as files land in a storage container, the Event Grid event that indicates the completed upload of a file is filtered and pushed to a Service Bus topic. Azure Functions (time triggered by default) pick up those events and act on them, namely creating transcription (Tx) requests using the Azure Speech Services batch pipeline. When the Tx request is successfully carried out, an event is placed in another queue in the same Service Bus resource.
<P>&nbsp;</P> <P>The list of best practices we implemented as part of the solution is:</P> <OL> <LI>Optimizing the number of audio files included in each transcription request, with the goal of using the shortest possible SAS TTL.</LI> <LI>Round-robin across selected regions to distribute load across the available regions (per customer request).</LI> <LI>Optimized retry logic to handle smooth scale-up and transient HTTP 429 errors.</LI> <LI>Running Azure Functions economically, ensuring minimal execution cost.</LI> </OL> <H2>Setup Guide</H2> <P>&nbsp;</P> <P>The following guide will help you create a set of resources on Azure that will manage the transcription of audio files.</P> <H2>Prerequisites</H2> <P>&nbsp;</P> <P>An&nbsp;<A href="#" target="_blank" rel="noopener">Azure Account</A>&nbsp;as well as an&nbsp;<A href="#" target="_blank" rel="noopener">Azure Speech key</A>&nbsp;are needed to run the Batch Ingestion Client.</P> <P>Here are the detailed steps to create a Speech resource:</P> <P>&nbsp;</P> <P><EM><STRONG>NOTE:</STRONG></EM>&nbsp;You need to create a Speech resource with a paid (S0) key. The free (F0) key will not work.
Optionally, for analytics, you can also create a Text Analytics resource.</P> <P>&nbsp;</P> <OL> <LI>Go to the&nbsp;<A href="#" target="_blank" rel="noopener">Azure portal</A>.</LI> <LI>Click on +Create Resource.</LI> <LI>Type 'Speech' in the search box.</LI> <LI>Click Create on the Speech resource.</LI> <LI>You will find the subscription key under&nbsp;<STRONG>Keys</STRONG>.</LI> <LI>You will also need the region, so make a note of that too.</LI> </OL> <P>To test your account, we suggest you use&nbsp;<A href="#" target="_blank" rel="noopener">Microsoft Azure Storage Explorer</A>.</P> <H3>The Project</H3> <P>Although you do not need to download or make any changes to the code, you can still download it from GitHub:</P> <P>&nbsp;</P> <PRE>git clone https://github.com/Azure-Samples/cognitive-services-speech-sdk
cd cognitive-services-speech-sdk/samples/batch/transcription-enabled-storage</PRE> <P>&nbsp;</P> <P>Make sure that you have downloaded the&nbsp;<A href="#" target="_blank" rel="noopener">ARM Template</A>&nbsp;from the repository.</P> <H2>Batch Ingestion Client Setup Instructions</H2> <P>&nbsp;</P> <OL> <LI>Click on&nbsp;<STRONG>+Create Resource</STRONG>&nbsp;in the&nbsp;<A href="#" target="_blank" rel="noopener">Azure portal</A>&nbsp;as shown in the following picture and type ‘<EM>template deployment</EM>’ in the search box.</LI> </OL> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image001.png" style="width: 986px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265484i95D5BC1C83CDF228/image-size/large?v=v2&amp;px=999" role="button" title="image001.png" alt="image001.png" /></span></P> <P class="lia-align-center">&nbsp;</P> <P>&nbsp;</P> <P>&nbsp; &nbsp; &nbsp; 2. Click on the&nbsp;<STRONG>Create</STRONG>&nbsp;button on the screen that appears as shown below.</P> <P>&nbsp; &nbsp; &nbsp;3. You will be creating the relevant Azure resources from the ARM template provided. Click on the ‘Build your own template in the editor’ link and wait for the new screen to load.</P> <P>&nbsp;</P> <P>You will be loading the template via the&nbsp;<STRONG>Load file</STRONG>&nbsp;option. Alternatively, you could simply copy/paste the template into the editor.</P> <P>Saving the template will result in the screen below. You will need to fill in the form provided. It is important that all the information is correct. Let us look at the form and go through each field.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image011.png" style="width: 640px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265489iC005F591513C0D18/image-size/large?v=v2&amp;px=999" role="button" title="image011.png" alt="image011.png" /></span></P> <P>&nbsp;</P> <P><EM><STRONG>NOTE:</STRONG></EM>&nbsp;Please use short, descriptive names in the form for your resource group; long resource group names may result in deployment errors.</P>
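<P>&nbsp;</P> <P>If you prefer scripting over portal clicks, the same ARM template can also be deployed from code. The sketch below uses the azure-identity and azure-mgmt-resource Python packages; the subscription, resource group, template file name, and parameter names/values are placeholders, and the real parameter set is the one described in the form walkthrough that follows.</P> <P>&nbsp;</P> <PRE>import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders - substitute your own subscription, resource group and local template file
subscription_id = "&lt;your subscription id&gt;"
resource_group = "transcription-rg"
template_file = "armtemplate.json"   # local copy of the ARM template downloaded from the repo

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

with open(template_file) as f:
    template = json.load(f)

# Illustrative subset of parameters; use the same values you would type into the form
parameters = {"AzureSpeechServicesKey": {"value": "&lt;your speech key&gt;"}}

poller = client.deployments.begin_create_or_update(
    resource_group,
    "batch-ingestion-client",
    {"properties": {"mode": "Incremental", "template": template, "parameters": parameters}},
)
print(poller.result().properties.provisioning_state)</PRE>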
<P>&nbsp;</P> <UL> <LI>First, pick the Azure Subscription Id within which you will create the resources.</LI> <LI>Either pick or create a resource group. [It is better to have all the resources within the same resource group, so we suggest you create a new one.]</LI> <LI>Pick a region. [This may be the same region as your Azure Speech key.]</LI> </UL> <P>The following settings all relate to the resources and their attributes:</P> <UL> <LI>Give your transcription-enabled storage account a name. [You will normally use a new storage account rather than an existing one; if you opt to use an existing one, all existing audio files in that account will be transcribed too.]</LI> </UL> <P>The following two steps are optional. Omitting them will result in using the base model to obtain transcripts. If you have created a Custom Speech model using <A href="#" target="_blank" rel="noopener">Speech Studio</A>, then:</P> <P>&nbsp;</P> <UL> <LI>Optionally, enter your primary Acoustic model ID.</LI> <LI>Optionally, enter your primary Language model ID.</LI> </UL> <P>If you want us to perform language identification on the audio prior to transcription, you can also specify a secondary locale. Our service will check whether the language of the audio content is the primary or the secondary locale and select the right model for transcription.</P> <P>Transcripts are obtained by polling the service. We acknowledge that there is a cost related to that, so the following setting gives you the option to limit that cost by telling your Azure Function how often you want it to fire.</P> <UL> <LI>Enter the polling frequency. [In many scenarios this only needs to happen a couple of times a day.]</LI> <LI>Enter the locale of the audio. [You need to tell us which language model to use to transcribe your audio.]</LI> <LI>Enter your Azure Speech subscription key and locale information.</LI> </UL> <LI-SPOILER><EM><STRONG>NOTE:</STRONG></EM>&nbsp;If you plan to transcribe a large volume of audio (say, millions of files), we propose that you rotate the traffic between regions. In the Azure Speech Subscription Key text box you can put as many keys as you like, separated by semicolons (';'). It is important that the corresponding regions (again separated by semicolons) appear in the Locale information text box. For example, if you have 3 keys (abc, xyz, 123) for east us, west us and central us respectively, then lay them out as 'abc;xyz;123' followed by 'east us;west us;central us'.</LI-SPOILER> <P>The rest of the settings relate to the transcription request. You can read more about those in our&nbsp;<A href="#" target="_blank" rel="noopener">docs</A>.</P> <UL> <LI>Select a profanity option.</LI> <LI>Select a punctuation option.</LI> <LI>Select whether to add diarization [all locales].</LI> <LI>Select whether to add word-level timestamps [all locales].</LI> </UL> <P>Do you need more than transcription? Do you need to apply Sentiment to your transcript?
Downstream analytics are possible too, with Text Analytics Sentiment and Redaction offered as part of this solution.</P> <P>&nbsp;</P> <P>If you want to perform Text Analytics, please add those credentials:</P> <UL> <LI>Add the Text Analytics key</LI> <LI>Add the Text Analytics region</LI> <LI>Add Sentiment</LI> <LI>Add data redaction</LI> </UL> <P>If you want to go further with analytics, we can also map the transcript JSON we produce to a DB schema.</P> <UL> <LI>Enter the SQL DB credential login</LI> <LI>Enter the SQL DB credential password</LI> </UL> <P>You can feed that data to your custom Power BI script or take the scripts included in this repository. Follow this <A href="#" target="_self">guide</A> for setting it up.</P> <P>&nbsp;</P> <P>Press&nbsp;<STRONG>Create</STRONG>&nbsp;to trigger the resource creation process. It typically takes 1-2 minutes. The set of resources that gets created is listed below.</P> <P>If a Consumption Plan (Y1) was selected for the Azure Functions, make sure that the functions are synced with the other resources (see&nbsp;<A href="#" target="_blank" rel="noopener">this</A>&nbsp;for further details).</P> <P>&nbsp;</P> <P>To do so, click on your StartTranscription function in the portal and wait until your function shows up:</P> <P>Do the same for the FetchTranscription function.</P> <P>&nbsp;</P> <LI-SPOILER><EM><STRONG>Important:</STRONG></EM>&nbsp;Until you restart both Azure Functions you may see errors.</LI-SPOILER> <H2>&nbsp;</H2> <H2>Running the Batch Ingestion Client</H2> <P>&nbsp;</P> <P>Upload audio files to the newly created audio-input container (results are added to the json-result-output and test-results-output containers). Once the files have been processed, you can check the results.</P> <P>&nbsp;</P> <P>Use&nbsp;<A href="#" target="_blank" rel="noopener">Microsoft Azure Storage Explorer</A>&nbsp;to test uploading files to your new account. The transcription process is asynchronous; obtaining a transcript usually takes about half the duration of the audio track. The structure of your newly created storage account will look like the picture below.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="image015.png" style="width: 297px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265492i29661E0FD6C8203E/image-size/large?v=v2&amp;px=999" role="button" title="image015.png" alt="image015.png" /></span></P> <P>&nbsp;</P> <P>There are several containers to distinguish between the various outputs. We suggest (for the sake of keeping things tidy) following this pattern and using the audio-input container as the only container for uploading your audio.</P> <H2>&nbsp;</H2> <H2>Customizing the Batch Ingestion Client</H2> <P>&nbsp;</P> <P>By default, the ARM template uses the newest version of the Batch Ingestion Client, which can be found in this repository. If you want to customize it further, clone the <A href="#" target="_self">repo</A>.</P> <P>&nbsp;</P> <P>To publish a new version, you can use Visual Studio: right-click on the respective project, click Publish, and follow the instructions.</P> <H2>&nbsp;</H2> <H2><SPAN>What to build next</SPAN></H2> <P>&nbsp;</P> <P><SPAN>Now that you’ve successfully implemented a speech to text scenario, you can build on this scenario.
Take a look at the insights&nbsp;<A href="#" target="_blank" rel="noopener">Text Analytics</A> provides from the transcript like caller and agent sentiment, key phrase extraction and entity recognition.&nbsp; If you’re looking specifically to solve for Call centre&nbsp;transcription, review <A href="#" target="_blank" rel="noopener">this docs page</A> for further guidance</SPAN></P> <P>&nbsp;</P> Tue, 30 Mar 2021 20:21:49 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/azure-speech-and-batch-ingestion/ba-p/2222539 Panos Periorellis 2021-03-30T20:21:49Z Microsoft named a Leader in 2021 Gartner Magic Quadrant for Cloud AI Developer Services https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100 <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Gartner CAIDS MQ graphic 2021.png" style="width: 957px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/265536i164046209606D6C1/image-size/large?v=v2&amp;px=999" role="button" title="Gartner CAIDS MQ graphic 2021.png" alt="Gartner CAIDS MQ graphic 2021.png" /></span></P> <P>Gartner recently released their Magic Quadrant for 2021 Cloud AI Developer Services. Microsoft is in the Leaders quadrant and was positioned highest on the ability to execute axis. You can download a complimentary copy of the <A href="#" target="_blank" rel="noopener">Magic Quadrant for Cloud AI Developer Services</A> for the full report. In this post, we’ll look at why, we think, Microsoft was placed in the Leaders quadrant.&nbsp;</P> <P>&nbsp;</P> <P>According to the report, “Gartner defines cloud AI developer services (CAIDS) as cloud-hosted or containerized services/models that allow development teams and business users to leverage artificial intelligence models via APIs, SDKs, or applications without requiring deep data science expertise.”</P> <P>&nbsp;</P> <P>They specifically evaluated services with capabilities in language, vision, and automated machine learning. For Azure, this includes Azure Cognitive Services, Azure Machine Learning, and Microsoft’s conversational AI portfolio. For Power Platform, this includes AI Builder and Power Virtual Agents.</P> <P>&nbsp;</P> <P>“Gartner believes that enterprise development teams will increasingly incorporate models built using AI and ML into applications. These services currently fall into three main functional areas: language, vision and automated machine learning (autoML). The language services include natural language understanding (NLU), conversational agent frameworks, text analytics, sentiment analysis and other capabilities. The vision services include image recognition, video content analysis and optical character recognition (OCR). The autoML services include automated tools that will let developers do data preparation, feature engineering, create models, deploy, monitor and manage models without having to learn data science.”</P> <P>&nbsp;</P> <P>Azure AI enables you to develop AI applications on your terms, apply AI responsibly, and deploy mission-critical AI solutions.</P> <P>&nbsp;</P> <H2>Develop on your terms</H2> <P>&nbsp;</P> <P>Azure AI allows you to build AI applications in your preferred software development language and deploy in the cloud, on-premises, or at the edge. Azure provides options for data scientists and developers of all skill levels – no machine learning expertise required. 
See the Microsoft section of the <A href="#" target="_blank" rel="noopener">Magic Quadrant for Cloud AI Developer Services</A>.</P> <P>&nbsp;</P> <H2>Apply AI responsibly</H2> <P>&nbsp;</P> <P>Azure offers tools and resources to help you understand, protect, and control your AI solutions, including responsible ML toolkits, responsible bot development guidelines, tools to help you explain model behavior and test for fairness, and more. We never use your data to train our models, and we keep principles like inclusiveness, fairness, transparency, and accountability in mind at every stage of our AI research, development, and deployment. See the Microsoft section of the <A href="#" target="_blank" rel="noopener">Magic Quadrant for Cloud AI Developer Services.</A></P> <P>&nbsp;</P> <H2>Deploy mission-critical solutions</H2> <P>&nbsp;</P> <P>Azure lets you access the same AI services that power products like Microsoft Teams and Xbox, and that are proven at global scale. Azure leads the industry when it comes to security, and we have the most comprehensive compliance coverage of any cloud service provider. We continue to innovate and our Microsoft Research team has made significant breakthroughs, most recently reaching human parity with <A href="#" target="_blank" rel="noopener">image captioning</A>. See the Microsoft section of the <A href="#" target="_blank" rel="noopener">Magic Quadrant for Cloud AI Developer Services.</A></P> <P>&nbsp;</P> <P>Whether you’re a professional developer or data scientist, or just getting started, we hope that you can use Azure AI services to build impactful AI-powered applications that solve complex problems and enhance customer experience.</P> <P>&nbsp;</P> <P><EM>This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.</EM></P> Fri, 19 Mar 2021 16:17:37 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/microsoft-named-a-leader-in-2021-gartner-magic-quadrant-for/ba-p/2223100 maddybutzbach 2021-03-19T16:17:37Z The Tenets of Knowledge Management Adoption https://gorovian.000webhostapp.com/?exam=t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091 <P><STRONG><SPAN data-contrast="none">Knowledge Management Systems and Adoption Key Tenets:</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="none">Sonia M. 
Ang – CSA</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">In today’s competitive business environment organizations need a clear roadmap that aligns with their training needs and focuses on both short-and long-term objectives.&nbsp; Knowledge Management is a tool that can be implemented to identify and appeal to the training needs of the modern employee.&nbsp; A successful Enterprise Learning system goes beyond the organizational level and allows employees access to information and knowledge, thus creating better alignment at the enterprise level.&nbsp;&nbsp; A well-designed Knowledge Management System can break down barriers by providing partners, clients, and customers with not only essential information and robust training, but also opportunities to promote and inform your organization’s products and services.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Employee retention and customer satisfaction are essential to any organizations long-term success.&nbsp;&nbsp; When considering your organization’s ROI, the benefits of Enterprise Learning are threefold:&nbsp; retention (customers and employees), satisfaction, and improved profitability.&nbsp; Enterprise Learning can be leveraged to provide better development and training opportunities thus promoting a feeling of empowerment amongst your team.&nbsp; Enterprise Learning promotes efficiencies in training, in turn your organization will recognize the cost-saving benefits due to lower employee turnover and customer churn.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="none">Enterprise Knowledge Management Adoption&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">1. 
</SPAN><STRONG><SPAN data-contrast="none">Management Sponsorship and a COE&nbsp;&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="none">&nbsp;</SPAN></STRONG><SPAN data-contrast="none">An executive sponsor provides that critical link between executive leadership and project management and helps support projects successfully to their completion at their expected performance.</SPAN><SPAN data-contrast="none">  </SPAN><SPAN data-contrast="none">Sponsorship of the Enterprise Knowledge Management Project will enhance your product base and create opportunities for your company in the rapidly advancing Knowledge Management sector.</SPAN><SPAN data-contrast="none">  </SPAN><SPAN data-contrast="none">Executive sponsorship will align with our company’s strategy to be experts in Knowledge Management.</SPAN><SPAN data-contrast="none">  </SPAN><SPAN data-contrast="none">Microsoft will be at the forefront of Knowledge Management as organizations rush to adopt more efficient and effective training strategies that better align to the modern worker's needs.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">2. </SPAN><STRONG><SPAN data-contrast="none">Beyond Training but Execution&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;134233279&quot;:true,&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Training programs at the company level are often too extensive in complexity and the amount of learning material can be downright overwhelming.&nbsp; Training professionals need to venture beyond traditional learning and utilize learning strategies that support the needs of today’s modern professionals.&nbsp; Microlearning, for example, provides a host of benefits to your organization in terms of increased learner participation, memorability of courses, and quick deployment with easy updates to your digital learning assets.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Knowledge Management allows for learning concepts to be extracted from larger training programs and utilized as checklists and instructional videos that are easily accessible at a moment’s notice.&nbsp; When learning material is successfully mined it allows the learning process to be refined, thus challenging concepts can be identified and made easier to process and understand.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="none">3. 
Collaborative and Social Learning</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">How does your organization support collaboration, problem-solving and the co-creation of knowledge?&nbsp; A successful Enterprise Learning Strategy will push an organization forward by improving collaboration via robust communities and meaningful discussion forums.&nbsp;&nbsp; Building professional learning communities on platforms like Slack and Microsoft TEAMS can help break the walls down in organizations where ideas and knowledge may be siloed.&nbsp; In addition to collaborative discussion platforms that give all community members a voice, the development of Expert Finders can be an important catalyst for creating a robust culture of collaboration.&nbsp; Hidden ideas and knowledge will organically emerge from people in your organization that may hold previously unearthed niche expertise.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">4. </SPAN><STRONG><SPAN data-contrast="none">Where is the Data?&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">Generating data for analysis is the foundation of a robust Enterprise Learning strategy.&nbsp; The key to success is to build data collection directly into technical systems supported by a centralized knowledge repository.&nbsp; Data collection in the&nbsp;</SPAN><SPAN data-contrast="none">education sphere has traditionally focused on summative assessments like exams that are intended to measure a learner’s mastery of objectives.&nbsp; Using the specifications in Enterprise Learning provides another set of metrics by allowing an organization to track formative assessments, such as data and social learning activity.&nbsp; For example, these metrics allows an organization to collect new data, adding another layer of data to your knowledge repository to support the creation of more meaningful formative and summative assessments.</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="none">5. Reusable Content and Reproducibility&nbsp;</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">How do organizations move past the traditional content development models where bulky training manuals were the norm?&nbsp; Learning data and insights help organizations to build learning solutions that are reusable and provide 24/7 access to learning.&nbsp; Additionally, a </SPAN><I><SPAN data-contrast="none">Headless CMS</SPAN></I><SPAN data-contrast="none"> allows your organization’s content creators to move away from the rigid templates that most traditional learning management systems utilize. 
This means content creators have more control over the quality of their content, and it streamlines the process of creating unique digital learning experiences for both your employees and customers.&nbsp; Perhaps your organization releases a short instructional video on the company's rules and regulations.&nbsp; This learning asset has value both as a stand-alone content object in a knowledge base and as a learning module in a more extensive communications course.&nbsp; Content creators in your organization, ranging from instructional designers to marketing professionals, often create multiple versions of the same material.&nbsp; Creating content that is reusable and not redundant is more efficient, as you are not reinventing the wheel.&nbsp; Additionally, you are not burdened with trying to maintain and keep up to date multiple versions of the same learning assets.</SPAN></P> <P>&nbsp;</P> <P><STRONG><SPAN data-contrast="none">6. “Findability” of Learning Assets: Digitization and Technology</SPAN></STRONG></P> <P><SPAN data-contrast="none">The successful implementation of KM tools can enhance the user experience through effective mining and classification of metadata.&nbsp; The power of KM is that it provides a taxonomy, an ontology, and a finely tuned search system. A well-designed metadata strategy will take advantage of your learning assets, which may include courses, webinars, professional learning communities (PLCs) and subject matter experts in your organization.&nbsp; This myriad of data exists across multiple systems in your organization.
Using metadata that is contextualized and consistent will ensure that your data is findable.&nbsp; Taking it one-step further ontologies can be tapped to support a complete network of shareable and reusable knowledge across a domain for each unique user.</SPAN><SPAN data-contrast="none"> </SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:240,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><STRONG><SPAN data-contrast="auto">Summary</SPAN></STRONG><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559738&quot;:312,&quot;335559739&quot;:240,&quot;335559740&quot;:240}">&nbsp;</SPAN></P> <P><SPAN data-contrast="none">The adoption of an Enterprise Management system can reframe your organization’s knowledge and learning infrastructure.&nbsp; Modern learners need to consume information quickly thus allowing them to efficiently apply and master essential skills and strategies.&nbsp; A well-engineered Enterprise Learning Plan that focuses on empowering your community will result in higher learner engagement, enhanced workplace skills and a robust culture-of-knowledge across your organization.&nbsp; Curiosity will drive success rather than traditional command and control training strategies.&nbsp; Empower your community by giving them the autonomy to self-direct their own learning.&nbsp; As a result, your organization will enjoy increased engagement, enhanced workforce skills and, in turn, a robust learning culture will grow and&nbsp;</SPAN><SPAN data-contrast="none">flourish</SPAN><SPAN data-ccp-props="{&quot;201341983&quot;:0,&quot;335559739&quot;:160,&quot;335559740&quot;:259}">&nbsp;</SPAN></P> Fri, 19 Mar 2021 16:00:00 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/the-tenets-of-knowledge-management-adoption/ba-p/2221091 Sonia Ang 2021-03-19T16:00:00Z Extract Data from PDFs using Form Recognizer with Code or Without! https://gorovian.000webhostapp.com/?exam=t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299 <P>Form Recognizer is a powerful tool to help build a variety of document machine learning solutions. It is one service however its made up of many prebuilt models that can perform a variety of essential document functions. You can even custom train a model using supervised or unsupervised learning for tasks outside of the scope of the prebuilt models! Read more about all the features of Form Recognizer<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">here</A>. In this example we will be looking at how to use one of the prebuilt models in the Form Recognizer service that can extract the data from a PDF document dataset. 
Our documents are invoices with common data fields, so we are able to use the prebuilt model without having to build a customized model.</P> <P>&nbsp;</P> <P>Sample Invoice:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="invoice.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264367i59D7C6180F17623E/image-size/large?v=v2&amp;px=999" role="button" title="invoice.png" alt="invoice.png" /></span></P> <P>&nbsp;</P> <P>After we take a look at how to do this with Python and Azure Form Recognizer, we will look at how to do the same process with no code using the Power Platform services: Power Automate and Form Recognizer built into AI Builder. In the Power Automate flow we schedule a process to run every day. The process looks in the<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>blob container to see if there are new files to be processed. If there are, it gets all blobs from the container and loops through each blob to extract the PDF data using a prebuilt AI Builder step. Then it deletes the processed document from the<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>container. See what it looks like below.</P> <P>&nbsp;</P> <P>Power Automate Flow:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="flowaibuild.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264369i241F53F6E21A228F/image-size/large?v=v2&amp;px=999" role="button" title="flowaibuild.png" alt="flowaibuild.png" /></span></P> <H3>&nbsp;</H3> <H3><FONT size="5">Prerequisites for Python</FONT></H3> <UL> <LI>Azure Account<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Sign up here!</A></LI> <LI><A href="#" target="_blank" rel="nofollow noopener">Anaconda</A><SPAN>&nbsp;</SPAN>and/or<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">VS Code</A></LI> <LI>Basic programming knowledge</LI> </UL> <H3><A id="user-content-prerequisites-for-power-automate" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A><FONT size="5">Prerequisites for Power Automate</FONT></H3> <UL> <LI>Power Automate Account<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Sign up here!</A></LI> <LI>No programming knowledge</LI> </UL> <H2>&nbsp;</H2> <H2><FONT size="5">Process PDFs with Python and Azure Form Recognizer Service</FONT></H2> <H3>&nbsp;</H3> <H3><A id="user-content-create-services" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Create Services</H3> <P>&nbsp;</P> <P>First, let's create the Form Recognizer Cognitive Service.</P> <UL> <LI>Go to <A href="#" target="_blank" rel="noopener">portal.azure.com</A> to create the resource or click this<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">link</A>.</LI> </UL> <P>Now let's create a storage account with containers to store the PDF dataset.
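<P>&nbsp;</P> <P>The portal steps are listed next, but if you would rather script this part, a short sketch like the one below (assuming the azure-storage-blob package and a storage-account connection string, both placeholders here) creates the two containers used throughout this walkthrough:</P> <P>&nbsp;</P> <PRE>from azure.core.exceptions import ResourceExistsError
from azure.storage.blob import BlobServiceClient

# Placeholder - copy the connection string from the storage account's "Access keys" blade
connect_str = "&lt;your storage connection string&gt;"
blob_service_client = BlobServiceClient.from_connection_string(connect_str)

# One container for unprocessed PDFs and one for PDFs that have already been processed
for name in ("raw", "processed"):
    try:
        blob_service_client.create_container(name)
    except ResourceExistsError:
        pass  # the container is already there, which is fine</PRE>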
We want two containers, one for the<SPAN>&nbsp;</SPAN><CODE>processed</CODE><SPAN>&nbsp;</SPAN>PDFs and one for the<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>unprocessed PDF.</P> <UL> <LI>Create an<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Azure Storage Account</A></LI> <LI>Create two containers:<SPAN>&nbsp;</SPAN><CODE>processed</CODE>,<SPAN>&nbsp;</SPAN><CODE>raw</CODE></LI> </UL> <H3>&nbsp;</H3> <H3><A id="user-content-upload-data" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Upload data</H3> <P>&nbsp;</P> <P>Upload your dataset to the Azure Storage<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>folder since they need to be processed. Once processed then they would get moved to the<SPAN>&nbsp;</SPAN><CODE>processed</CODE><SPAN>&nbsp;</SPAN>container.</P> <P>&nbsp;</P> <P>The result should look something like this:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="storageaccounts.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264370i359B312F0A1B3D34/image-size/large?v=v2&amp;px=999" role="button" title="storageaccounts.png" alt="storageaccounts.png" /></span></P> <P>&nbsp;</P> <H3>&nbsp;</H3> <H3>Create Notebook and Install Packages</H3> <P>&nbsp;</P> <P>Now that we have our data stored in Azure Blob Storage we can connect and process the PDF forms to extract the data using the Form Recognizer Python SDK. You can also use the Python SDK with local data if you are not using Azure Storage. This example will assume you are using Azure Storage.</P> <UL> <LI> <P>Create a new<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Jupyter notebook in VS Code</A>.</P> </LI> <LI> <P>Install the Python SDK</P> </LI> </UL> <DIV class="highlight highlight-source-python"> <PRE>!p<SPAN class="pl-s1">ip</SPAN> <SPAN class="pl-s1">install</SPAN> <SPAN class="pl-s1">azure</SPAN><SPAN class="pl-c1">-</SPAN><SPAN class="pl-s1">ai</SPAN><SPAN class="pl-c1">-</SPAN><SPAN class="pl-s1">formrecognizer</SPAN> <SPAN class="pl-c1">-</SPAN><SPAN class="pl-c1">-</SPAN><SPAN class="pl-s1">pre</SPAN></PRE> </DIV> <UL> <LI>Then we need to import the packages.</LI> </UL> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-k">import</SPAN> <SPAN class="pl-s1">os</SPAN> <SPAN class="pl-k">from</SPAN> <SPAN class="pl-s1">azure</SPAN>.<SPAN class="pl-s1">core</SPAN>.<SPAN class="pl-s1">exceptions</SPAN> <SPAN class="pl-k">import</SPAN> <SPAN class="pl-v">ResourceNotFoundError</SPAN> <SPAN class="pl-k">from</SPAN> <SPAN class="pl-s1">azure</SPAN>.<SPAN class="pl-s1">ai</SPAN>.<SPAN class="pl-s1">formrecognizer</SPAN> <SPAN class="pl-k">import</SPAN> <SPAN class="pl-v">FormRecognizerClient</SPAN> <SPAN class="pl-k">from</SPAN> <SPAN class="pl-s1">azure</SPAN>.<SPAN class="pl-s1">core</SPAN>.<SPAN class="pl-s1">credentials</SPAN> <SPAN class="pl-k">import</SPAN> <SPAN class="pl-v">AzureKeyCredential</SPAN> <SPAN class="pl-k">import</SPAN> <SPAN class="pl-s1">os</SPAN>, <SPAN class="pl-s1">uuid</SPAN> <SPAN class="pl-k">from</SPAN> <SPAN class="pl-s1">azure</SPAN>.<SPAN class="pl-s1">storage</SPAN>.<SPAN class="pl-s1">blob</SPAN> <SPAN class="pl-k">import</SPAN> <SPAN class="pl-v">BlobServiceClient</SPAN>, <SPAN class="pl-v">BlobClient</SPAN>, <SPAN class="pl-v">ContainerClient</SPAN>, <SPAN class="pl-s1">__version__</SPAN></PRE> </DIV> <H3>&nbsp;</H3> <H3><A id="user-content-create-formrecognizerclient" class="anchor" href="#" 
target="_blank" rel="noopener" aria-hidden="true"></A>Create FormRecognizerClient</H3> <P>&nbsp;</P> <UL> <LI>Update the<SPAN>&nbsp;</SPAN><CODE>endpoint</CODE><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><CODE>key</CODE><SPAN>&nbsp;</SPAN>with the values from the service you created. These values can be found in the Azure Portal under the Form Recongizer service you created under the<SPAN>&nbsp;</SPAN><CODE>Keys and Endpoint</CODE><SPAN>&nbsp;</SPAN>on the navigation menu.</LI> </UL> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-s1">endpoint</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s">"&lt;your endpoint&gt;"</SPAN> <SPAN class="pl-s1">key</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s">"&lt;your key&gt;"</SPAN></PRE> </DIV> <UL> <LI>We then use the<SPAN>&nbsp;</SPAN><CODE>endpoint</CODE><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><CODE>key</CODE><SPAN>&nbsp;</SPAN>to connect to the service and create the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">FormRecongizerClient</A></LI> </UL> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-s1">form_recognizer_client</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-v">FormRecognizerClient</SPAN>(<SPAN class="pl-s1">endpoint</SPAN>, <SPAN class="pl-v">AzureKeyCredential</SPAN>(<SPAN class="pl-s1">key</SPAN>))</PRE> </DIV> <UL> <LI>Create the<SPAN>&nbsp;</SPAN><CODE>print_results</CODE><SPAN>&nbsp;</SPAN>helper function for use later to print out the results of each invoice.</LI> </UL> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-k">def</SPAN> <SPAN class="pl-en">print_result</SPAN>(<SPAN class="pl-s1">invoices</SPAN>, <SPAN class="pl-s1">blob_name</SPAN>): <SPAN class="pl-k">for</SPAN> <SPAN class="pl-s1">idx</SPAN>, <SPAN class="pl-s1">invoice</SPAN> <SPAN class="pl-c1">in</SPAN> <SPAN class="pl-en">enumerate</SPAN>(<SPAN class="pl-s1">invoices</SPAN>): <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"--------Recognizing invoice {}--------"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">blob_name</SPAN>)) <SPAN class="pl-s1">vendor_name</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"VendorName"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">vendor_name</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Vendor Name: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">vendor_name</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">vendor_name</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">vendor_address</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"VendorAddress"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">vendor_address</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Vendor Address: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">vendor_address</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">vendor_address</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">customer_name</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"CustomerName"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">customer_name</SPAN>: <SPAN 
class="pl-en">print</SPAN>(<SPAN class="pl-s">"Customer Name: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">customer_name</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">customer_name</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">customer_address</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"CustomerAddress"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">customer_address</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Customer Address: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">customer_address</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">customer_address</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">customer_address_recipient</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"CustomerAddressRecipient"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">customer_address_recipient</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Customer Address Recipient: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">customer_address_recipient</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">customer_address_recipient</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">invoice_id</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"InvoiceId"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">invoice_id</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Invoice Id: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">invoice_id</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">invoice_id</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">invoice_date</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"InvoiceDate"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">invoice_date</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Invoice Date: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">invoice_date</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">invoice_date</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">invoice_total</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"InvoiceTotal"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">invoice_total</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Invoice Total: {} has confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">invoice_total</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">invoice_total</SPAN>.<SPAN class="pl-s1">confidence</SPAN>)) <SPAN class="pl-s1">due_date</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">invoice</SPAN>.<SPAN class="pl-s1">fields</SPAN>.<SPAN class="pl-en">get</SPAN>(<SPAN class="pl-s">"DueDate"</SPAN>) <SPAN class="pl-k">if</SPAN> <SPAN class="pl-s1">due_date</SPAN>: <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"Due Date: {} has 
confidence: {}"</SPAN>.<SPAN class="pl-en">format</SPAN>(<SPAN class="pl-s1">due_date</SPAN>.<SPAN class="pl-s1">value</SPAN>, <SPAN class="pl-s1">due_date</SPAN>.<SPAN class="pl-s1">confidence</SPAN>))</PRE> </DIV> <H3>&nbsp;</H3> <H3><A id="user-content-connect-to-blob-storage" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Connect to Blob Storage</H3> <P>&nbsp;</P> <UL> <LI>Now lets<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">connect to our blob storage containers</A><SPAN>&nbsp;</SPAN>and create the<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">BlobServiceClient</A>. We will use the client to connect to the<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>and<SPAN>&nbsp;</SPAN><CODE>processed</CODE><SPAN>&nbsp;</SPAN>containers that we created earlier.</LI> </UL> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-c"># Create the BlobServiceClient object which will be used to get the container_client</SPAN> <SPAN class="pl-s1">connect_str</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s">"&lt;Get connection string from the Azure Portal&gt;"</SPAN> <SPAN class="pl-s1">blob_service_client</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-v">BlobServiceClient</SPAN>.<SPAN class="pl-en">from_connection_string</SPAN>(<SPAN class="pl-s1">connect_str</SPAN>) <SPAN class="pl-c"># Container client for raw container.</SPAN> <SPAN class="pl-s1">raw_container_client</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">blob_service_client</SPAN>.<SPAN class="pl-en">get_container_client</SPAN>(<SPAN class="pl-s">"raw"</SPAN>) <SPAN class="pl-c"># Container client for processed container</SPAN> <SPAN class="pl-s1">processed_container_client</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">blob_service_client</SPAN>.<SPAN class="pl-en">get_container_client</SPAN>(<SPAN class="pl-s">"processed"</SPAN>) <SPAN class="pl-c"># Get base url for container.</SPAN> <SPAN class="pl-s1">invoiceUrlBase</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">raw_container_client</SPAN>.<SPAN class="pl-s1">primary_endpoint</SPAN> <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s1">invoiceUrlBase</SPAN>)</PRE> </DIV> <P><EM>HINT: If you get a "HttpResponseError: (InvalidImageURL) Image URL is badly formatted." error make sure the proper permissions to access the container are set. Learn more about Azure Storage Permissions<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">here</A></EM></P> <H3>&nbsp;</H3> <H3><A id="user-content-extract-data-from-pdfs" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Extract Data from PDFs</H3> <P>&nbsp;</P> <P>We are ready to process the blobs now! Here we will call<SPAN>&nbsp;</SPAN><CODE>list_blobs</CODE><SPAN>&nbsp;</SPAN>to get a list of blobs in the<SPAN>&nbsp;</SPAN><CODE>raw</CODE><SPAN>&nbsp;</SPAN>container. Then we will loop through each blob, call the<SPAN>&nbsp;</SPAN><CODE>begin_recognize_invoices_from_url</CODE><SPAN>&nbsp;</SPAN>to extract the data from the PDF. Then we have our helper method to print the results. 
Once we have extracted the data from the PDF we will<SPAN>&nbsp;</SPAN><CODE>upload_blob</CODE><SPAN>&nbsp;</SPAN>to the<SPAN>&nbsp;</SPAN><CODE>processed</CODE><SPAN>&nbsp;</SPAN>folder and<SPAN>&nbsp;</SPAN><CODE>delete_blob</CODE><SPAN>&nbsp;</SPAN>from the raw folder.</P> <P>&nbsp;</P> <DIV class="highlight highlight-source-python"> <PRE><SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s">"<SPAN class="pl-cce">\n</SPAN>Processing blobs..."</SPAN>) <SPAN class="pl-s1">blob_list</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">raw_container_client</SPAN>.<SPAN class="pl-en">list_blobs</SPAN>() <SPAN class="pl-k">for</SPAN> <SPAN class="pl-s1">blob</SPAN> <SPAN class="pl-c1">in</SPAN> <SPAN class="pl-s1">blob_list</SPAN>: <SPAN class="pl-s1">invoiceUrl</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s">f'<SPAN class="pl-s1"><SPAN class="pl-kos">{</SPAN>invoiceUrlBase<SPAN class="pl-kos">}</SPAN></SPAN>/<SPAN class="pl-s1"><SPAN class="pl-kos">{</SPAN>blob.name<SPAN class="pl-kos">}</SPAN></SPAN>'</SPAN> <SPAN class="pl-en">print</SPAN>(<SPAN class="pl-s1">invoiceUrl</SPAN>) <SPAN class="pl-s1">poller</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">form_recognizer_client</SPAN>.<SPAN class="pl-en">begin_recognize_invoices_from_url</SPAN>(<SPAN class="pl-s1">invoiceUrl</SPAN>) <SPAN class="pl-c"># Get results</SPAN> <SPAN class="pl-s1">invoices</SPAN> <SPAN class="pl-c1">=</SPAN> <SPAN class="pl-s1">poller</SPAN>.<SPAN class="pl-en">result</SPAN>() <SPAN class="pl-c"># Print results</SPAN> <SPAN class="pl-en">print_result</SPAN>(<SPAN class="pl-s1">invoices</SPAN>, <SPAN class="pl-s1">blob</SPAN>.<SPAN class="pl-s1">name</SPAN>) <SPAN class="pl-c"># Copy blob to processed</SPAN> <SPAN class="pl-s1">processed_container_client</SPAN>.<SPAN class="pl-en">upload_blob</SPAN>(<SPAN class="pl-s1">blob</SPAN>, <SPAN class="pl-s1">blob</SPAN>.<SPAN class="pl-s1">blob_type</SPAN>, <SPAN class="pl-s1">overwrite</SPAN><SPAN class="pl-c1">=</SPAN><SPAN class="pl-c1">True</SPAN>) <SPAN class="pl-c"># Delete blob from raw now that its processed</SPAN> <SPAN class="pl-s1">raw_container_client</SPAN>.<SPAN class="pl-en">delete_blob</SPAN>(<SPAN class="pl-s1">blob</SPAN>)</PRE> </DIV> <P>Each result should look similar to this for the above invoice example:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="pythonresult.png" style="width: 546px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264371iA3116FA86E09C32D/image-dimensions/546x131?v=v2" width="546" height="131" role="button" title="pythonresult.png" alt="pythonresult.png" /></span></P> <P>&nbsp;</P> <P>The prebuilt invoices model worked great for our invoices so we don't need to train a customized Form Recognizer model to improve our results. But what if we did and what if we didn't know how to code?! You can still leverage all this awesomeness in AI Builder with Power Automate without writing any code. We will take a look at this same example in Power Automate next.</P> <H2>&nbsp;</H2> <H2><A id="user-content-use-form-recognizer-with-ai-builder-in-power-automate" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A><FONT size="5">Use Form Recognizer with AI Builder in Power Automate</FONT></H2> <P>&nbsp;</P> <P>You can achieve these same results using no code with Form Recognizer in AI Builder with Power Automate. 
Lets take a look at how we can do that.</P> <H3>&nbsp;</H3> <H3><A id="user-content-create-a-new-flow" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Create a New Flow</H3> <P>&nbsp;</P> <UL> <LI>Log in to<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">Power Automate</A></LI> <LI>Click<SPAN>&nbsp;</SPAN><CODE>Create</CODE><SPAN>&nbsp;</SPAN>then click<SPAN>&nbsp;</SPAN><CODE>Scheduled Cloud Flow</CODE>. You can trigger Power Automate flows in a variety of ways so keep in mind that you may want to select a different trigger for your project.</LI> <LI>Give the Flow a name and select the schedule you would like the flow to run on.</LI> </UL> <H3>&nbsp;</H3> <H3><A id="user-content-connect-to-blob-storage-1" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Connect to Blob Storage</H3> <P>&nbsp;</P> <UL> <LI>Click<SPAN>&nbsp;</SPAN><CODE>New Step</CODE></LI> <LI><CODE>List blobs</CODE><SPAN>&nbsp;</SPAN>Step <UL> <LI>Search for<SPAN>&nbsp;</SPAN><CODE>Azure Blob Storage</CODE><SPAN>&nbsp;</SPAN>and select<SPAN>&nbsp;</SPAN><CODE>List blobs</CODE></LI> <LI>Select the ellipsis click<SPAN>&nbsp;</SPAN><CODE>Create new connection</CODE><SPAN>&nbsp;</SPAN>if your storage account isn't already connected <UL> <LI>Fill in the<SPAN>&nbsp;</SPAN><CODE>Connection Name</CODE>,<SPAN>&nbsp;</SPAN><CODE>Azure Storage Account name</CODE><SPAN>&nbsp;</SPAN>(the account you created), and the<SPAN>&nbsp;</SPAN><CODE>Azure Storage Account Access Key</CODE><SPAN>&nbsp;</SPAN>(which you can find in the resource keys in the Azure Portal)</LI> <LI>Then select<SPAN>&nbsp;</SPAN><CODE>Create</CODE></LI> </UL> </LI> <LI>Once the storage account is selected click the folder icon on the right of the list blobs options. You should see all the containers in the storage account, select<SPAN>&nbsp;</SPAN><CODE>raw</CODE>.</LI> </UL> </LI> </UL> <P>&nbsp;</P> <P>Your flow should look something like this:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="connecttoblob.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264373iCE2CBD509DA1B8DA/image-size/large?v=v2&amp;px=999" role="button" title="connecttoblob.png" alt="connecttoblob.png" /></span></P> <P>&nbsp;</P> <H3>Loop Through Blobs to Extract the Data</H3> <P>&nbsp;</P> <UL> <LI>Click the plus sign to create a new step</LI> <LI>Click<SPAN>&nbsp;</SPAN><CODE>Control</CODE><SPAN>&nbsp;</SPAN>then<SPAN>&nbsp;</SPAN><CODE>Apply to each</CODE></LI> <LI>Select the textbox and a list of blob properties will appear. Select the<SPAN>&nbsp;</SPAN><CODE>value</CODE><SPAN>&nbsp;</SPAN>property</LI> <LI>Next select<SPAN>&nbsp;</SPAN><CODE>add action</CODE><SPAN>&nbsp;</SPAN>from within the<SPAN>&nbsp;</SPAN><CODE>Apply to each</CODE><SPAN>&nbsp;</SPAN>Flow step.</LI> <LI>Add the<SPAN>&nbsp;</SPAN><CODE>Get blob content</CODE><SPAN>&nbsp;</SPAN>step: <UL> <LI>Search for<SPAN>&nbsp;</SPAN><CODE>Azure Blob Storage</CODE><SPAN>&nbsp;</SPAN>and select<SPAN>&nbsp;</SPAN><CODE>Get blob content</CODE></LI> <LI>Click the textbox and select the<SPAN>&nbsp;</SPAN><CODE>Path</CODE><SPAN>&nbsp;</SPAN>property. 
This will get the<SPAN>&nbsp;</SPAN><CODE>File content</CODE><SPAN>&nbsp;</SPAN>that we will pass into the Form Recognizer.</LI> </UL> </LI> <LI>Add the<SPAN>&nbsp;</SPAN><CODE>Process and save information from invoices</CODE><SPAN>&nbsp;</SPAN>step: <UL> <LI>Click the plus sign and then<SPAN>&nbsp;</SPAN><CODE>add new action</CODE></LI> <LI>Search for<SPAN>&nbsp;</SPAN><CODE>Process and save information from invoices</CODE></LI> <LI>Select the textbox and then the property<SPAN>&nbsp;</SPAN><CODE>File Content</CODE><SPAN>&nbsp;</SPAN>from the<SPAN>&nbsp;</SPAN><CODE>Get blob content</CODE><SPAN>&nbsp;</SPAN>section</LI> </UL> </LI> <LI>Add the<SPAN>&nbsp;</SPAN><CODE>Copy Blob</CODE><SPAN>&nbsp;</SPAN>step: <UL> <LI>Repeat the add action steps</LI> <LI>Search for<SPAN>&nbsp;</SPAN><CODE>Azure Blob Storage</CODE><SPAN>&nbsp;</SPAN>and select<SPAN>&nbsp;</SPAN><CODE>Copy Blob</CODE></LI> <LI>Select the<SPAN>&nbsp;</SPAN><CODE>Source url</CODE><SPAN>&nbsp;</SPAN>text box and select the<SPAN>&nbsp;</SPAN><CODE>Path</CODE><SPAN>&nbsp;</SPAN>property</LI> <LI>Select the<SPAN>&nbsp;</SPAN><CODE>Destination blob path</CODE><SPAN>&nbsp;</SPAN>and put<SPAN>&nbsp;</SPAN><CODE>/processed</CODE><SPAN>&nbsp;</SPAN>for the processed container</LI> <LI>Select<SPAN>&nbsp;</SPAN><CODE>Overwrite?</CODE><SPAN>&nbsp;</SPAN>dropdown and select<SPAN>&nbsp;</SPAN><CODE>Yes</CODE><SPAN>&nbsp;</SPAN>if you want the copied blob to overwrite blobs with the existing name.</LI> </UL> </LI> <LI>Add the<SPAN>&nbsp;</SPAN><CODE>Delete Blob</CODE><SPAN>&nbsp;</SPAN>step: <UL> <LI>Repeat the add action steps</LI> <LI>Search for<SPAN>&nbsp;</SPAN><CODE>Azure Blob Storage</CODE><SPAN>&nbsp;</SPAN>and select<SPAN>&nbsp;</SPAN><CODE>Delete Blob</CODE></LI> <LI>Select the<SPAN>&nbsp;</SPAN><CODE>Blob</CODE><SPAN>&nbsp;</SPAN>text box and select the<SPAN>&nbsp;</SPAN><CODE>Path</CODE><SPAN>&nbsp;</SPAN>property</LI> </UL> </LI> </UL> <P>The<SPAN>&nbsp;</SPAN><CODE>Apply to each</CODE><SPAN>&nbsp;</SPAN>block should look something like this:</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="applytoeachblock.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/264375i8CB49A960730C7EF/image-size/large?v=v2&amp;px=999" role="button" title="applytoeachblock.png" alt="applytoeachblock.png" /></span></P> <UL> <LI>Save and Test the Flow <UL> <LI>Once you have completed creating the flow save and test it out using the built in test features that are part of Power Automate.</LI> </UL> </LI> </UL> <P>This prebuilt model again worked great on our invoice data. However if you have a more complex dataset, use the AI Builder to label and create a customized machine learning model for your specific dataset. Read more about how to do that<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="nofollow noopener">here</A>.</P> <H2>&nbsp;</H2> <H2><A id="user-content-conclusion" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A><FONT size="5">Conclusion</FONT></H2> <P>&nbsp;</P> <P>We went over a fraction of the things that you can do with Form Recognizer so don't let the learning stop here! 
Check out the below highlights of new Form Recognizer features that were just announced and the additional doc links to dive deeper into what we did here.</P> <H3>&nbsp;</H3> <H3><A id="user-content-additional-resources" class="anchor" href="#" target="_blank" rel="noopener" aria-hidden="true"></A>Additional Resources</H3> <P><A href="#" target="_blank" rel="nofollow noopener">New Form Recognizer Features</A></P> <P><A href="#" target="_blank" rel="nofollow noopener">What is Form Recognizer?</A></P> <P><A href="#" target="_blank" rel="nofollow noopener">Quickstart: Use the Form Recognizer client library or REST API</A></P> <P><A href="#" target="_blank" rel="nofollow noopener">Tutorial: Create a form-processing app with AI Builder</A></P> <P><A href="#" target="_self">AI Developer Resources page</A></P> <P><A href="#" target="_self">AI Essentials video including Form Recognizer</A></P> Tue, 16 Mar 2021 16:43:56 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/extract-data-from-pdfs-using-form-recognizer-with-code-or/ba-p/2214299 cassieview 2021-03-16T16:43:56Z Model understanding with Azure Machine Learning https://gorovian.000webhostapp.com/?exam=t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141 <P><EM>This post is co-authored by Mehrnoosh Sameki, Program Manager, Azure Machine Learning.</EM></P> <P>&nbsp;</P> <H2>Overview</H2> <P>&nbsp;</P> <P>Model interpretability and fairness are part of the ‘Understand’ pillar of Azure Machine Learning’s Responsible ML offerings. As machine learning becomes ubiquitous in decision-making, from the end-user utilizing AI-powered applications to the business stakeholders using models to make data-driven decisions, it is necessary to provide tools at scale for model transparency and fairness.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" style="width: 626px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262968iCD258909811687E8/image-size/large?v=v2&amp;px=999" role="button" title="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" alt="3a4710a1-d3bb-42ba-bb8f-8603ebab4033.jpg" /></span></P> <P>&nbsp;</P> <P><SPAN style="font-family: inherit;">Explaining a machine learning model and performing fairness assessment are important for the following users:</SPAN></P> <P>&nbsp;</P> <UL> <LI>Data scientists and model evaluators - At training time, to help them understand their model predictions and assess the fairness of their AI systems, enhancing their ability to debug and improve models.</LI> <LI>Business stakeholders and auditors - To build trust in ML models and deploy them more confidently.</LI> </UL> <P>Customers like Scandinavian Airlines (SAS) and Ernst &amp; Young (EY) put the interpretability and fairness packages to the test to be able to deploy models more confidently.</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">SAS used interpretability to confidently identify fraud</A> in its EuroBonus loyalty program. SAS data scientists could debug and verify model predictions using interpretability.
They produced explanations about model behavior that gave stakeholders confidence in the machine learning models and assisted with meeting regulatory requirements.&nbsp;</LI> <LI><A href="#" target="_blank" rel="noopener">EY utilized fairness assessment and unfairness mitigation</A> techniques with real mortgage adjudication data to improve the fairness of loan decisions, reducing the accuracy disparity between men and women from 7 percent to less than 0.5 percent.</LI> </UL> <P>&nbsp;</P> <P>We are releasing enhanced experiences and feature additions for the interpretability and fairness toolkits in Azure Machine Learning, to empower more ML practitioners and teams to build trust with AI systems.</P> <P>&nbsp;</P> <H3><FONT size="6" color="#000000">Model understanding using interpretability and fairness toolkits</FONT></H3> <P>&nbsp;</P> <P>These two toolkits can be used together to understand model predictions and mitigate unfairness. For this demonstration, we shall take a look at a loan allocation scenario. Let’s say that the label indicates whether each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Tech blog diagram.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262695i2806C3064A5DEAB3/image-size/large?v=v2&amp;px=999" role="button" title="Tech blog diagram.jpg" alt="Tech blog diagram.jpg" /></span></P> <H1>&nbsp;</H1> <H1><FONT size="5">Identify your model's fairness issues</FONT></H1> <P>&nbsp;</P> <P>Our revamped fairness dashboard can help uncover allocation harm, which leads to the model unfairly allocating loans among different demographic groups. The dashboard can additionally uncover quality-of-service harm, where a model fails to provide the same quality of service to some people as it does to others. Using the fairness dashboard, you can identify whether the model treats different sex demographics unfairly.</P> <P>&nbsp;</P> <H3><FONT size="5">Dashboard configurations</FONT></H3> <P>&nbsp;</P> <P>When you first load the fairness dashboard, you need to configure it with the desired settings, including:</P> <UL> <LI>selection of your sensitive demographic of choice (e.g., sex<A href="https://gorovian.000webhostapp.com/?exam=#_ftn1" target="_self" name="_ftnref1"><SPAN>[1]</SPAN></A>)</LI> <LI>model performance metric (e.g., accuracy)</LI> <LI>fairness metric (e.g., demographic parity difference).</LI> </UL> <P>&nbsp;</P> <H3><FONT size="5">Model assessment view</FONT></H3> <P>&nbsp;</P> <P>After setting the configurations, you will land on a model assessment view where you can see how the model is treating different demographic groups.</P> <P>&nbsp;</P> <P><IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-fairness-toolkit/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation model’s fairness with the AzureML’s fairness toolkit - Microsoft Channel 9 Video"></IFRAME></P> <P>&nbsp;</P> <P>Our fairness assessment shows an 18.3% disparity in the selection rate (or demographic group difference); in other words, the model qualifies males for loan acceptance at a rate 18.3 percentage points higher than females.
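</P> <P>&nbsp;</P> <P>The same metrics can also be computed programmatically with the open-source Fairlearn package, which also powers the mitigation algorithms discussed later in this post. The snippet below is a minimal, hypothetical sketch; the trained model, the test data, and the sensitive-feature column (sex) are placeholders for your own assets.</P> <LI-CODE lang="python">from fairlearn.metrics import MetricFrame, demographic_parity_difference, selection_rate
from sklearn.metrics import accuracy_score

# Placeholders: `model` is a trained classifier, `X_test` / `y_test` are the held-out
# features and labels, and `sensitive_sex` is the sensitive attribute column.
y_pred = model.predict(X_test)
sensitive_sex = X_test["sex"]

# Break overall accuracy and selection rate down by sensitive group.
by_group_metrics = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=sensitive_sex,
)
print(by_group_metrics.by_group)

# Demographic parity difference: the gap in selection rates between the groups.
dpd = demographic_parity_difference(y_test, y_pred, sensitive_features=sensitive_sex)
print(f"Demographic parity difference: {dpd:.3f}")</LI-CODE> <P>&nbsp;</P> <P>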
Now that you’ve seen some unfairness indicators in your model, you can next use our interpretability toolkit to understand why your model is making such predictions.</P> <P>&nbsp;</P> <H2>Diagnose your model’s predictions</H2> <P>&nbsp;</P> <P>The newly revamped interpretability dashboard greatly improves on the user experience of the previous dashboard. In the loan allocation scenario, you can understand how a model treats female loan applicants differently than male loan applicants using the interpretability toolkit:</P> <P>&nbsp;</P> <P><IFRAME src="https://channel9.msdn.com/Shows/Docs-AI/loan-allocation-interpretability/player" width="960" height="540" frameborder="0" allowfullscreen="allowfullscreen" title="Understanding loan allocation with interpretability toolkit - Microsoft Channel 9 Video"></IFRAME></P> <P>&nbsp;</P> <OL> <LI><STRONG>Dataset cohort creation:</STRONG> You can slice and dice your data into subgroups (e.g., female vs. male vs. unspecified) and investigate or compare your model’s performance and explanations across them.</LI> <LI><STRONG style="font-family: inherit;">Model performance tab:</STRONG><SPAN style="font-family: inherit;"> With the predefined female and male cohorts, we can observe the different prediction distributions between the male and female cohorts, with females experiencing a higher probability of being rejected for a loan.</SPAN></LI> <LI><STRONG>Dataset explorer tab:</STRONG> Now that you have seen in the model performance tab how females are rejected at a higher rate than males, you can use the data explorer tab to observe the ground truth distribution between males and females. For males, the ground truth data is well balanced between those receiving a rejection or an approval, whereas for females the ground truth data is heavily skewed towards rejection, thereby explaining how the model could come to associate the label ‘female’ with rejection.</LI> <LI><STRONG>Aggregate feature importance tab:</STRONG> Now we observe which top features contribute to the model’s overall prediction (also called global explanations) towards loan rejection. We sort our top feature importances by the Female cohort, which indicates that while the feature “Sex” is the second most important feature contributing towards the model’s predictions for individuals in the female cohort, it does not influence how the model makes predictions for individuals in the male cohort. The dependence plot for the feature “Sex” also shows that only the female group has positive feature importance towards the prediction of being rejected for a loan, whereas the model does not look at the feature “Sex” for males when making predictions.</LI> <LI><STRONG>Individual feature importance &amp; What-If tab:</STRONG> Drilling deeper into the model’s prediction for a specific individual (also called local explanations), we look at the individual feature importances for only the Female cohort. We select an individual who is at the threshold of being accepted for a loan by the model and observe which features contributed towards her prediction of being rejected. “Sex” is the second most important feature contributing towards the model prediction for this individual. The Individual Conditional Expectation (ICE) plot shows how perturbing a given feature value across a range impacts the model’s prediction. We select the feature “Sex” and can see that if this feature had been flipped to male, the probability of being rejected is lowered drastically.
We create a new hypothetical What-If point from this individual data point and switch only the “Sex” from female to male, and observe that without changing any other feature related to financial competency, the model now predicts that this individual will have their loan application accepted.</LI> </OL> <P>Once some potential fairness issues are observed and diagnosed, you can move on to mitigating those unfairness issues.</P> <P>&nbsp;</P> <H2>Mitigate unfairness issues in your model</H2> <P>&nbsp;</P> <P>The unfairness mitigation part is powered by the <A href="#" target="_blank" rel="noopener">Fairlearn</A> open-source package, which includes two types of mitigation algorithms: <A href="#" target="_blank" rel="noopener">postprocessing algorithms</A> (<A href="#" target="_blank" rel="noopener">ThresholdOptimizer</A>) and <A href="#" target="_blank" rel="noopener">reduction algorithms</A> (<A href="#" target="_blank" rel="noopener">GridSearch</A>, <A href="#" target="_blank" rel="noopener">ExponentiatedGradient</A>). Both operate as “wrappers” around any standard classification or regression algorithm. <A href="#" target="_blank" rel="noopener">GridSearch</A>, for instance, treats any standard classification or regression algorithm as a black box, and iteratively (a) re-weights the data points and (b) retrains the model after each re-weighting. After 10 to 20 iterations, this process results in a model that satisfies the constraints implied by the selected fairness metric while maximizing model performance. <A href="#" target="_blank" rel="noopener">ThresholdOptimizer</A>, on the other hand, takes as its input a scoring function that underlies an existing classifier and identifies a separate threshold for each group to optimize the performance metric, while simultaneously satisfying the constraints implied by the selected fairness metric.</P> <P>&nbsp;</P> <P>The fairness dashboard also enables the comparison of multiple models, such as the models produced by different learning algorithms and different mitigation approaches. Bypassing the dominated models of GridSearch, for instance, you can see the unmitigated model on the upper right side (with the highest accuracy and highest demographic parity difference) and can click on any of the mitigated models to observe them further. This allows you to examine trade-offs between performance and fairness.</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="model fairness comparison.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262696i74D857109F63B0D3/image-size/large?v=v2&amp;px=999" role="button" title="model fairness comparison.png" alt="model fairness comparison.png" /></span></P> <H2>&nbsp;</H2> <H2>Comparing results of unfairness mitigation</H2> <P>&nbsp;</P> <P>After applying the unfairness mitigation, we go back to the interpretability dashboard and compare the unmitigated model with the mitigated model.
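</P> <P>&nbsp;</P> <P>To give a sense of what running such a mitigation looks like in code, here is a minimal, hypothetical sketch of a GridSearch sweep with Fairlearn under a demographic parity constraint. The estimator choice, the training data (X_train, y_train), the sensitive-feature column (A_train), and the grid size are all placeholders you would adapt to your own scenario.</P> <LI-CODE lang="python">from fairlearn.reductions import DemographicParity, GridSearch
from sklearn.linear_model import LogisticRegression

# Placeholders: X_train / y_train are the (already encoded) training features and labels,
# A_train is the sensitive attribute (e.g., a pandas Series of sex values), and
# X_test is the held-out data used for comparison afterwards.
sweep = GridSearch(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
    grid_size=20,
)

# The sweep re-weights the data and retrains the wrapped estimator for each grid point.
sweep.fit(X_train, y_train, sensitive_features=A_train)

# All candidate models are available for comparison (e.g., in the fairness dashboard) ...
candidate_models = sweep.predictors_

# ... and predict() uses the candidate selected by the sweep's selection rule.
y_pred_mitigated = sweep.predict(X_test)</LI-CODE> <P>&nbsp;</P> <P>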
In the figure below, we see a more even probability distribution for the female cohort for the mitigated model on the right:</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model interpretability before after.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262697i9D6C42F08A512188/image-size/large?v=v2&amp;px=999" role="button" title="Model interpretability before after.jpg" alt="Model interpretability before after.jpg" /></span></P> <P>&nbsp;</P> <P>Revisiting the fairness assessment dashboard, we also see a drastic decrease in demographic parity difference from 18.8% (unmitigated model) to 0.412% (mitigated model):</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Model fairness before after.jpg" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262698i60AA2DC0494DE9BF/image-size/large?v=v2&amp;px=999" role="button" title="Model fairness before after.jpg" alt="Model fairness before after.jpg" /></span></P> <P>&nbsp;</P> <P>&nbsp;</P> <H2>Saving model explanations and fairness metrics to Azure Machine Learning Run History</H2> <P>&nbsp;</P> <P>Azure Machine Learning’s (AzureML) interpretability and fairness toolkits can be run both locally and remotely. If run locally, the libraries will not contact any Azure services. Alternatively, you can run the algorithms remotely on AzureML compute and log all the explainability and fairness information into AzureML’s run history via the AzureML SDK to save and share them with other team members or stakeholders in AzureML studio.</P> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AML explanation.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262699i455620FEE621210A/image-size/large?v=v2&amp;px=999" role="button" title="AML explanation.png" alt="AML explanation.png" /></span></P> <P>&nbsp;</P> <P>Azure ML’s Automated ML supports explainability for its best model as well as on-demand explainability for any other models generated by Automated ML.</P> <P>&nbsp;</P> <H2>Learn more</H2> <P>&nbsp;</P> <P><A href="#" target="_blank" rel="noopener">Explore this scenario</A> and other sample notebooks in the Azure Machine Learning sample notebooks GitHub.</P> <P>Learn more about the <A href="#" target="_blank" rel="noopener">Azure Machine Learning service</A><SPAN>.</SPAN></P> <P>Learn more about <A href="#" target="_blank" rel="noopener">Responsible ML offerings in Azure Machine Learning</A>.</P> <P>Learn more about <A href="#" target="_blank" rel="noopener">interpretability</A> and <A href="#" target="_blank" rel="noopener">fairness</A> concepts and see documentation on how-to guides for using <A href="#" target="_blank" rel="noopener">interpretability</A> and <A href="#" target="_blank" rel="noopener">fairness</A> in Azure Machine Learning.</P> <P>Get started with a <A href="#" target="_blank" rel="noopener">free trial of the Azure Machine Learning service</A>.</P> <P>&nbsp;</P> <P><A href="https://gorovian.000webhostapp.com/?exam=#_ftnref1" target="_blank" rel="noopener" name="_ftn1"><SPAN>[1]</SPAN></A> This dataset is from the 1994 US Census Bureau Database where “sex” in the data was limited to binary categorizations.</P> Tue, 16 Mar 2021 19:05:14 GMT
https://gorovian.000webhostapp.com/?exam=t5/azure-ai/model-understanding-with-azure-machine-learning/ba-p/2201141 mithigpe 2021-03-16T19:05:14Z Advance Resource Access Governance for AML https://gorovian.000webhostapp.com/?exam=t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520 <DIV class="lia-message-subject-wrapper lia-component-subject lia-component-message-view-widget-subject-with-options"> <DIV class="MessageSubject"> <DIV class="MessageSubjectIcons ">&nbsp;</DIV> </DIV> </DIV> <DIV class="lia-message-body-wrapper lia-component-message-view-widget-body"> <DIV id="bodyDisplay" class="lia-message-body"> <DIV class="lia-message-body-content"> <P>Access control is a fundamental building block for enterprise customers, where protecting assets at various levels is absolutely necessary to ensure that only the relevant people in certain positions of authority are given access with different privileges. This is especially prevalent in machine learning, where data is essential for building ML models, and companies are highly cautious about how the data is accessed and managed, especially with the introduction of GDPR. We are seeing an increasing number of customers seeking explicit control of not only the data, but also the various stages of the machine learning lifecycle, from experimentation all the way to operationalization. Assets such as generated models, cluster creation and model deployment need to be governed to ensure that controls are in line with the company’s policy.</P> <P>&nbsp;</P> <P><SPAN>Azure traditionally provides Role-based Access Control [1], which helps manage access to resources: who can access them and what they can do with them.&nbsp; This is primarily achieved via the concept of roles.&nbsp; A role defines a collection of permissions.</SPAN></P> <P>&nbsp;</P> <P><STRONG><FONT size="5">Existing Roles in AML</FONT></STRONG></P> <P>&nbsp;</P> <P>Azure Machine Learning provides three roles [3] for enterprise customers to provision as coarse-grained access control, designed with simplicity in mind. The first role (Owner) has the highest level of privileges and grants full control of the workspace. This is followed by Contributor, a slightly more restricted role that prevents users from changing role assignments. Reader has the most restrictive permissions and is typically read/view only (see Figure 1 below).&nbsp;&nbsp;</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rbac-3.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262754iAB96302DF84B6F1D/image-size/large?v=v2&amp;px=999" role="button" title="rbac-3.png" alt="rbac-3.png" /></span><BR /> <P>&nbsp;</P> <P class="lia-align-center">&nbsp;Figure 1 - Existing AML roles</P> <P>&nbsp;</P> <P><SPAN>What we have found with customers is that Coarse-grained Access Control immensely simplifies the management of roles and works quite well for a small team working primarily in the experimentation environment. However, when a company decides to operationalize its ML work, especially in the enterprise space, these roles become far too broad and too simplistic. In the enterprise space, the deployment tends to have several stages (such as dev, test, pre-prod, prod, etc.) and requires various skillsets (data scientist, data engineer, etc.)
with greater control at each stage.&nbsp; For example, a Data Scientist may not operate in the production environment. A Data Engineer can only provision resources and should not have the ability to commission and decommission training clusters. It is crucial for companies to enforce and monitor such governance policies to maintain the integrity of their business and IT processes.</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>Unfortunately, such requirements cannot be captured with the existing roles. Enterprises need a better mechanism to define policies for the various assets in AML to satisfy their business-specific requirements.</SPAN></P> <P><SPAN>&nbsp;</SPAN></P> <P><SPAN>This is where the exciting new advanced Role-based Access Control feature really shines. It is based on Fine-grained Access Control at the component level (see Figure 2), with a number of pre-built out-of-the-box roles, plus the ability to create custom roles that can capture and enforce more complex access governance processes. &nbsp;</SPAN></P> <P>&nbsp;</P> <P><FONT size="5"><STRONG>Advanced Fine-grained Role-based Access Control</STRONG></FONT></P> <P>&nbsp;</P> <P>The new advanced Role-based Access Control feature of AML solves many enterprise problems around the ability to grant or restrict user permissions for various components.&nbsp; Azure ML currently defines 16 components with varying permissions.</P> <BR /> <P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="aml-components.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260370iC2929C8E66E43458/image-size/large?v=v2&amp;px=999" role="button" title="aml-components.png" alt="aml-components.png" /></span></P> <P>&nbsp;</P> <P class="lia-align-center">Figure 2 - Components Level RBAC</P> <P>&nbsp;</P> <P>Each component defines a list of actions such as read, write, delete, etc.&nbsp; These actions can then be combined to create a specific custom role. As an illustration, Figure 3 below shows the list of actions currently available for the Datastore component.</P> <P>&nbsp;</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="policy-1.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262756iB724D52251F4900B/image-size/large?v=v2&amp;px=999" role="button" title="policy-1.png" alt="policy-1.png" /></span> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-center">Figure 3 - Datastore Actions</P> <P>&nbsp;</P> <P>Datastores, along with Datasets, are important concepts in Azure Machine Learning, since they provide access to various data sources with lineage and tracking capabilities.&nbsp; Many enterprises have built global data lakes that hold terabytes of data, which can include highly sensitive information. Companies are quite protective of who can access this data and require business justification for how it is accessed and used.
It is therefore imperative that tighter access control is mandated for a specific role, such as a Data Engineer, to accomplish such a task.</P> <P>&nbsp;</P> <P>Fortunately, AML's advanced access control provides custom roles, so companies can cater for their company-specific access requirements, which may be a hybrid of the built-in roles.</P> <P>&nbsp;</P> </DIV> <DIV class="lia-message-body-content"> <P><STRONG><FONT size="5">Custom Role</FONT></STRONG></P> <P>&nbsp;</P> <P>A custom role [4] allows the creation of Fine-grained Access Control on various components, such as the workspace, datastore, etc.&nbsp;</P> <UL> <LI>It can be any combination of data or control plane actions that AzureML+AISC support.</LI> <LI>It is useful for creating roles scoped to specific actions, such as an MLOps Engineer role.</LI> </UL> <P>These controls are defined in a JSON policy definition, for example:</P> <P>&nbsp;</P> <LI-CODE lang="json">{
  "Name": "Data Scientist",
  "IsCustom": true,
  "Description": "Can run experiment but can't create or delete datastore.",
  "Actions": ["*"],
  "NotActions": [
    "Microsoft.MachineLearningServices/workspaces/*/delete",
    "Microsoft.MachineLearningServices/workspaces/datastores/write",
    "Microsoft.MachineLearningServices/workspaces/datastores/delete",
    "Microsoft.Authorization/*/write"
  ],
  "AssignableScopes": [
    "/subscriptions/&lt;subscription_id&gt;/resourceGroups/&lt;resource_group_name&gt;/providers/Microsoft.MachineLearningServices/workspaces/&lt;workspace_name&gt;"
  ]
}</LI-CODE> <P>&nbsp;</P> <P>The above definition describes a Data Scientist who can run an experiment but cannot create or delete a Datastore. This role can be created using the Azure CLI (<CODE>az role definition create --role-definition filename</CODE>); however, the CLI ML extension needs to be installed first.
&nbsp;</P> <P>&nbsp;</P> <H2 id="toc-hId--1063417684">Role Operation Workflow</H2> <P>&nbsp;</P> <P>In an organization, the following activities are typically undertaken by the various role owners:&nbsp;</P> <UL> <LI>The sub admin requests AmlCompute quota for the enterprise</LI> <LI>They create a resource group (RG) and a workspace for a specific team, and also set a workspace-level quota</LI> <LI>The team lead (aka workspace admin) then creates compute within the quota that the sub admin defined for that workspace</LI> <LI>A Data Scientist then uses the compute (clusters or instances) that the workspace admin created for them.</LI> </UL> <P>&nbsp;</P> <H2 id="toc-hId-1424095149">Roles for Enterprise</H2> <P>&nbsp;</P> <P>AML provides a single environment covering everything from end-to-end experimentation to operationalization.&nbsp; For a start-up this is really useful, as they tend to operate in a very agile manner where many iterations happen in a short period of time, and the ability to move quickly from ideation to production reduces their cycle time.&nbsp; This may not be the case for enterprise customers, however, who would typically use two or three environments to carry out their production workload, such as Dev, QA and Prod.&nbsp;</P> <P>&nbsp;</P> <P>Dev is used for experimentation, QA for satisfying various functional and non-functional requirements, and Prod for deployment into production for consumer usage.</P> <P>&nbsp;</P> <P>These environments also have various roles carrying out different activities, such as Data Scientist, Data Engineer and MLOps Engineer (see Figure 8 below).</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="role-4.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/262758iDDE94C2D4806F6B4/image-size/large?v=v2&amp;px=999" role="button" title="role-4.png" alt="role-4.png" /></span> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-center">Figure 8 - Enterprise Roles</P> <P>&nbsp;</P> <P>A Data Scientist normally operates in the Dev environment and has full access to all the permissions related to carrying out experiments, such as provisioning training clusters, building models, etc. Some permissions are granted in the QA environment, primarily related to model testing and performance, and very minimal access is granted in the Prod environment, mainly telemetry (see Table 1 below).&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P>A Data Engineer, on the other hand, primarily operates in the Build and QA environments.
Their main focus is data handling, such as data loading and data wrangling.&nbsp; They have restricted access in the Prod environment.</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_10-1614737951507.png" style="width: 864px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260367iECC3583AFC821F5F/image-dimensions/864x345?v=v2" width="864" height="345" role="button" title="Mufajjul_Ali_10-1614737951507.png" alt="Mufajjul_Ali_10-1614737951507.png" /></span> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-center">Table 1 - Role/environment Matrix</P> <P>&nbsp;</P> <P>An MLOps Engineer has some permissions in the Dev environment, but full permissions in QA and Prod.&nbsp; This is because an MLOps Engineer is tasked with building the pipelines, gluing things together, and ultimately deploying models in production.</P> <P>&nbsp;</P> <P>The interesting part is how all these roles, environments and other components fit together in Azure to provide the much-needed access governance for enterprise customers.&nbsp;</P> <P>&nbsp;</P> <H2 id="toc-hId--517044038">Enterprise AML Roles Deployment</H2> <P>&nbsp;</P> <P>It is important for enterprises to be able to model this complex role/environment mapping, as shown in Table 1.&nbsp; Fortunately, this can be achieved in Azure using a combination of AD groups, roles and resource groups.</P> <P>&nbsp;</P> <span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Mufajjul_Ali_11-1614737951524.png" style="width: 724px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/260368i241720729954A0AF/image-dimensions/724x470?v=v2" width="724" height="470" role="button" title="Mufajjul_Ali_11-1614737951524.png" alt="Mufajjul_Ali_11-1614737951524.png" /></span> <P>&nbsp;</P> <P>&nbsp;</P> <P>&nbsp;</P> <P class="lia-align-center">Figure 9 - Enterprise AML Roles Deployment</P> <P>&nbsp;</P> <P>Fundamentally, Azure Active Directory groups play a major part in gluing all these components together and making them functional.&nbsp;</P> <P>&nbsp;</P> <P>The first step is to group the users for a given persona (DS, DE, etc.) into a “Role AD group”. Then assign roles with various RBAC actions (Data Writer, MLContributor, etc.) to this AD group.&nbsp; All these users will now inherit the permissions specific to the role(s).&nbsp; Multiple AD groups will be created for the different persona roles.</P> <P><STRONG>&nbsp;</STRONG></P> <P>Separate AD groups (‘AD group for Environment’) are then created for each environment (i.e.
Dev, QA and Prod); the Role AD Groups are then added to these Environment AD groups.&nbsp; This maps users belonging to a specific role persona, with the given permissions, to an environment.</P> <P>&nbsp;</P> <P>The ‘AD group for Environment’ is then assigned to a resource group, which contains a specific AML Workspace.&nbsp; This ensures that the role permissions assigned to users will be enforced at the workspace level.&nbsp;</P> <P>&nbsp;</P> <H2><STRONG>Summary</STRONG></H2> <P>&nbsp;</P> <P>In this blog, we have discussed the new advanced Role-based Access Control and how it can be applied in a complex enterprise with various environments and different user personas.</P> <P>&nbsp;</P> <P>The important point to note is the flexibility that comes with this new feature: it can operate on any of the 16 AML components and define Fine-grained Access Control for each through custom roles, in addition to the four out-of-the-box roles, which should be sufficient for the majority of customers.&nbsp;<STRONG>&nbsp;</STRONG></P> <H2 id="toc-hId-1970468795">&nbsp;</H2> <P><SPAN>References</SPAN></P> <P><STRONG>&nbsp;</STRONG></P> <P>[1]<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">https://docs.microsoft.com/en-us/azure/role-based-access-control/overview</A></P> <P>[2]<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">https://azure.microsoft.com/en-gb/services/machine-learning/</A></P> <P>[3]<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">https://docs.microsoft.com/en-us/azure/machine-learning/concept-enterprise-security</A></P> <P>[4]<SPAN>&nbsp;</SPAN><A href="#" target="_blank" rel="noopener noreferrer">https://docs.microsoft.com/en-us/azure/role-based-access-control/custom-roles</A></P> <P>&nbsp;</P> <P>Additional Links:</P> <P>&nbsp;</P> <DIV><A tabindex="-1" title="https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles" href="#" target="_blank" rel="noreferrer noopener">https://docs.microsoft.com/en-us/azure/machine-learning/how-to-assign-roles</A></DIV> <DIV>&nbsp;</DIV> <P>&nbsp;</P> <P>co-authors:<SPAN>&nbsp;</SPAN><A href="https://gorovian.000webhostapp.com/?exam=t5/user/viewprofilepage/user-id/195402" target="_blank" rel="noopener">@Nishank Gupt and @John Wu</A></P> </DIV> </DIV> </DIV> Thu, 11 Mar 2021 09:28:08 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/advance-resource-access-governance-for-aml/ba-p/2180520 mufy 2021-03-11T09:28:08Z Improving collaboration and productivity in Azure Machine Learning https://gorovian.000webhostapp.com/?exam=t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906 <P><EM>This post is co-authored by Sharon Xu, Program Manager, Azure Notebooks.</EM></P> <P>&nbsp;</P> <P>Today we are very proud to announce the next set of productivity features and improvements for the notebook experience. Since <A href="https://gorovian.000webhostapp.com/?exam=t5/azure-ai/bringing-intellisense-collaboration-and-more-to-jupyter/ba-p/1362009" target="_blank" rel="noopener">we announced the GA release</A> of Notebooks in Azure Machine Learning (Azure ML), <SPAN>we have learned a lot from our customers</SPAN>. Over the past few months, we have incrementally improved the notebook experience while simultaneously contributing back to <A href="#" target="_blank" rel="noopener">the open source nteract project</A>.
The Azure ML team recently released a robust set of new functionalities designed to improve data scientist productivity and collaboration in Azure ML Notebooks.</P> <H2>&nbsp;</H2> <H2>Data Scientist &amp; Developer Productivity</H2> <P>We have spoken to several data scientists and developers to fully understand the additional features needed to improve productivity while developing machine learning projects. From this feedback, we have found that users consistently needed the following enhancements to speed up their workflow: a clear indication that a cell has finished running, a way to templatize common code excerpts, a way to check variable contents, and more. The following list covers the most highly requested productivity features:</P> <P>&nbsp;</P> <UL> <LI>Cell Status Bar. The status bar located in each cell indicates the cell state: whether a cell has been queued, successfully executed, or run into an error. The status bar also displays the execution time of the last run.</LI> <LI><A href="#" target="_blank" rel="noopener">Variable Explorer.</A> Provides a quick glance into the data type, size, and contents of your variables and dataframes, allowing for quicker and simpler debugging.</LI> </UL> <P>&nbsp;</P> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_5-1614125127829.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257237iB8D804F9493E69DB/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_5-1614125127829.png" alt="abeomor_5-1614125127829.png" /></span></P> <P>&nbsp;</P> <P>Figure 1: (1) Cell status bar (2) Variable explorer</P> <P>&nbsp;</P> <UL> <LI>Notebook snippets (preview). Common Azure ML code excerpts are now available at your fingertips. Navigate to the code snippets panel, accessible via the toolbar, or activate the in-code snippets menu using Ctrl + Space.&nbsp;</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_4-1614125123189.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257236iFDAB2F193BD4424E/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_4-1614125123189.png" alt="abeomor_4-1614125123189.png" /></span></P> <P>&nbsp;</P> <P>Figure 2: (1) Notebook snippets panel, showing all useful snippets</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">IntelliCode</A>. IntelliCode provides intelligent auto-completion suggestions using an ML algorithm that analyzes the context of your notebook code. IntelliCode suggestions are designated with a star.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_3-1614125118045.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257235i8B5BCB3FB1176BA5/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_3-1614125118045.png" alt="abeomor_3-1614125118045.png" /></span></P> <P>&nbsp;</P> <P>Figure 3: IntelliCode in Azure ML Notebooks</P> <P>&nbsp;</P> <UL> <LI>Keyboard shortcuts with full Jupyter parity. Azure ML now supports all the <A href="#" target="_blank" rel="noopener">keyboard shortcuts available in Jupyter</A> and more.</LI> <LI><A href="#" target="_blank" rel="noopener">Table of Contents.</A> For large notebooks, the Table of Contents panel allows you to navigate to the desired section.
The sections of the notebook are designated by the Markdown headers.</LI> <LI>Markdown Side-by-side Editor in Notebooks. Within each notebook, the new side-by-side editor allows you to view the rendered results of your Markdown cells directly as you edit your notebook.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_2-1614125111883.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257234iAE1377BE0412F326/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_2-1614125111883.png" alt="abeomor_2-1614125111883.png" /></span></P> <P>&nbsp;</P> <P>Figure 4: (1) Table of Contents pane (2) Markdown side-by-side editor</P> <H2>&nbsp;</H2> <H2>Collaboration and Sharing</H2> <P>An increasing number of data scientists and developers are creating notebooks collaboratively and sharing these notebooks across their team. We heard feedback that most users feel they are missing adequate tools to edit notebooks simultaneously or share their notebooks with a broader audience. Users often resort to screen shares and calls to complete or present work within a notebook. We recently released a few new features to help address some of these issues:</P> <P>&nbsp;</P> <UL> <LI>Co-editing (preview). Co-editing makes collaboration easier than ever. The notebook can now be shared by sending the notebook URL, allowing multiple users to edit the notebook in real time.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_1-1614125073756.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257232i6D39A594790A76B9/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_1-1614125073756.png" alt="abeomor_1-1614125073756.png" /></span></P> <P>&nbsp;</P> <P>Figure 5: Live Co-editing in Azure ML</P> <P>&nbsp;</P> <UL> <LI><A href="#" target="_blank" rel="noopener">Export Notebook as Python, LaTeX or HTML</A>. When you feel satisfied with the results from your notebook and are ready to present to your colleagues, you can export the notebook to various formats for easy sharing. LaTeX, HTML, and .py are currently supported.</LI> </UL> <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="abeomor_0-1614125062082.png" style="width: 999px;"><img src="https://techcommunity.microsoft.com/t5/image/serverpage/image-id/257231iFD15BDAC14ECD764/image-size/large?v=v2&amp;px=999" role="button" title="abeomor_0-1614125062082.png" alt="abeomor_0-1614125062082.png" /></span></P> <P>&nbsp;</P> <P>Figure 6: Export Notebooks as Python and more in Azure ML</P> <H2>&nbsp;</H2> <H2>Get Started Today</H2> <P>To begin using these features in Azure ML Notebooks, you will first need to <A href="#" target="_blank" rel="noopener">create an Azure Machine Learning workspace</A>. Your Azure ML workspace serves as your one-stop-shop for all your machine learning needs, where you can create and share all your machine learning assets.</P> <P>Once you have your workspace set up, you can get started using <A href="#" target="_blank" rel="noopener">all the features in the Azure ML Notebooks experience</A><SPAN>.</SPAN> The notebooks experience aims to provide you with an integrated suite of data science tools.
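</P> <P>&nbsp;</P> <P>If you prefer to set up the workspace programmatically rather than through the Azure portal, it can also be created with the Azure ML Python SDK. The snippet below is a minimal, hypothetical sketch; the workspace name, subscription ID, resource group, and region are placeholders.</P> <LI-CODE lang="python">from azureml.core import Workspace

# Placeholder values -- substitute your own subscription, resource group, and region.
ws = Workspace.create(
    name="my-aml-workspace",
    subscription_id="&lt;your-subscription-id&gt;",
    resource_group="my-aml-rg",
    create_resource_group=True,
    location="eastus",
)

# Persist the workspace details locally so later sessions can reconnect with Workspace.from_config().
ws.write_config()
print(ws.name, ws.location, ws.resource_group)</LI-CODE> <P>&nbsp;</P> <P>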
Users can start working with a highly productive and collaborative Jupyter notebook editor directly in their workspace as well as quickly access other ML assets such as experiment details, datasets, models, and more.</P> <P>With the addition of this host of features, notebooks in Azure ML aim to improve every aspect of your development workflow: collaboration, code editing, and debugging. Give these features a try and <A href="#" target="_self">leave your feedback</A>. The feedback provided by our community is what drives us to improve and build new features.&nbsp; As we continue to push out new releases, keep an eye out, because the team has a few more exciting features coming out soon.</P> Wed, 10 Mar 2021 18:28:13 GMT https://gorovian.000webhostapp.com/?exam=t5/azure-ai/improving-collaboration-and-productivity-in-azure-machine/ba-p/2160906 abeomor 2021-03-10T18:28:13Z Integrating AI: Prototyping a No-Code solution with Power Apps https://gorovian.000webhostapp.com/?exam=t5/azure-ai/integrating-ai-prototyping-a-no-code-solution-with-power-apps/ba-p/2189550 <P><SPAN data-k